Dec 09 11:26:56 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 09 11:26:56 crc restorecon[4652]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:56 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 11:26:57 crc restorecon[4652]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 09 11:26:58 crc kubenswrapper[4849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 11:26:58 crc kubenswrapper[4849]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 09 11:26:58 crc kubenswrapper[4849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 11:26:58 crc kubenswrapper[4849]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 09 11:26:58 crc kubenswrapper[4849]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 09 11:26:58 crc kubenswrapper[4849]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.366572 4849 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369565 4849 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369586 4849 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369591 4849 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369594 4849 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369598 4849 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369602 4849 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369607 4849 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369611 4849 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369615 4849 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369620 4849 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369565 4849 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369586 4849 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369591 4849 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369594 4849 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369598 4849 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369602 4849 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369607 4849 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369611 4849 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369615 4849 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369620 4849 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369624 4849 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369628 4849 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369632 4849 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369636 4849 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369639 4849 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369650 4849 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369653 4849 feature_gate.go:330] unrecognized feature gate: Example
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369657 4849 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369660 4849 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369664 4849 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369667 4849 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369671 4849 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369676 4849 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369681 4849 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369685 4849 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369689 4849 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369693 4849 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369697 4849 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369701 4849 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369704 4849 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369707 4849 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369711 4849 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369714 4849 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369718 4849 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369721 4849 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369725 4849 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369729 4849 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369732 4849 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369738 4849 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369741 4849 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369747 4849 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369751 4849 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369755 4849 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369760 4849 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369764 4849 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369767 4849 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369771 4849 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369775 4849 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369779 4849 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369782 4849 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369786 4849 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369789 4849 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369793 4849 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369797 4849 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369801 4849 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369804 4849 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369808 4849 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369812 4849 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369815 4849 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369819 4849 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369822 4849 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369826 4849 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369830 4849 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369834 4849 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369837 4849 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369840 4849 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369844 4849 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369849 4849 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369853 4849 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369857 4849 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.369860 4849 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
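These are warnings rather than errors by design: OpenShift hands its full cluster-level gate set to the embedded kubelet, and gates the kubelet's own registry does not know (GatewayAPI, NewOLM, and so on) are logged and skipped, while known-but-deprecated or already-GA gates (KMSv1, CloudDualStackNodeIPs, ValidatingAdmissionPolicy) warn that the override will eventually disappear. A minimal self-contained Go sketch of that behavior; this is not the real k8s.io/component-base featuregate package, just an illustration of the lookup-and-warn pattern behind feature_gate.go:330:

package main

import "log"

// A tiny stand-in for the kubelet's gate registry, with default values.
var knownGates = map[string]bool{
	"KMSv1":                     false,
	"CloudDualStackNodeIPs":     true,
	"ValidatingAdmissionPolicy": true,
}

// applyGates merges requested overrides onto the defaults; unknown names
// are warned about and dropped, mirroring the log lines above.
func applyGates(requested map[string]bool) map[string]bool {
	effective := map[string]bool{}
	for name, def := range knownGates {
		effective[name] = def
	}
	for name, val := range requested {
		if _, ok := knownGates[name]; !ok {
			log.Printf("unrecognized feature gate: %s", name) // cf. feature_gate.go:330
			continue
		}
		effective[name] = val
	}
	return effective
}

func main() {
	log.Println(applyGates(map[string]bool{"KMSv1": true, "GatewayAPI": true}))
}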
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369945 4849 flags.go:64] FLAG: --address="0.0.0.0"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369955 4849 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369963 4849 flags.go:64] FLAG: --anonymous-auth="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369969 4849 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369975 4849 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369979 4849 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369984 4849 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369990 4849 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369995 4849 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.369999 4849 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370003 4849 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370007 4849 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370012 4849 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370016 4849 flags.go:64] FLAG: --cgroup-root=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370020 4849 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370025 4849 flags.go:64] FLAG: --client-ca-file=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370029 4849 flags.go:64] FLAG: --cloud-config=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370033 4849 flags.go:64] FLAG: --cloud-provider=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370037 4849 flags.go:64] FLAG: --cluster-dns="[]"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370042 4849 flags.go:64] FLAG: --cluster-domain=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370046 4849 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370050 4849 flags.go:64] FLAG: --config-dir=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370054 4849 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370059 4849 flags.go:64] FLAG: --container-log-max-files="5"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370065 4849 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370070 4849 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370074 4849 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370085 4849 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370089 4849 flags.go:64] FLAG: --contention-profiling="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370094 4849 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370098 4849 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370102 4849 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370107 4849 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370113 4849 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370117 4849 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370122 4849 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370126 4849 flags.go:64] FLAG: --enable-load-reader="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370130 4849 flags.go:64] FLAG: --enable-server="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370134 4849 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370139 4849 flags.go:64] FLAG: --event-burst="100"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370144 4849 flags.go:64] FLAG: --event-qps="50"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370148 4849 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370152 4849 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370156 4849 flags.go:64] FLAG: --eviction-hard=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370161 4849 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370165 4849 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370169 4849 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370174 4849 flags.go:64] FLAG: --eviction-soft=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370177 4849 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370181 4849 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370185 4849 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370189 4849 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370193 4849 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370201 4849 flags.go:64] FLAG: --fail-swap-on="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370205 4849 flags.go:64] FLAG: --feature-gates=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370210 4849 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370214 4849 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370218 4849 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370223 4849 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370228 4849 flags.go:64] FLAG: --healthz-port="10248"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370232 4849 flags.go:64] FLAG: --help="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370236 4849 flags.go:64] FLAG: --hostname-override=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370240 4849 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370244 4849 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370249 4849 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370253 4849 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370257 4849 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370262 4849 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370266 4849 flags.go:64] FLAG: --image-service-endpoint=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370271 4849 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370274 4849 flags.go:64] FLAG: --kube-api-burst="100"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370278 4849 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370283 4849 flags.go:64] FLAG: --kube-api-qps="50"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370287 4849 flags.go:64] FLAG: --kube-reserved=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370291 4849 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370295 4849 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370299 4849 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370303 4849 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370307 4849 flags.go:64] FLAG: --lock-file=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370311 4849 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370316 4849 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370320 4849 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370326 4849 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370330 4849 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370334 4849 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370339 4849 flags.go:64] FLAG: --logging-format="text"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370343 4849 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370348 4849 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370352 4849 flags.go:64] FLAG: --manifest-url=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370356 4849 flags.go:64] FLAG: --manifest-url-header=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370362 4849 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370367 4849 flags.go:64] FLAG: --max-open-files="1000000"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370372 4849 flags.go:64] FLAG: --max-pods="110"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370376 4849 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370380 4849 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370384 4849 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370388 4849 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370392 4849 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370396 4849 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370400 4849 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370424 4849 flags.go:64] FLAG: --node-status-max-images="50"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370429 4849 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370433 4849 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370437 4849 flags.go:64] FLAG: --pod-cidr=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370442 4849 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370449 4849 flags.go:64] FLAG: --pod-manifest-path=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370453 4849 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370457 4849 flags.go:64] FLAG: --pods-per-core="0"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370462 4849 flags.go:64] FLAG: --port="10250"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370466 4849 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370470 4849 flags.go:64] FLAG: --provider-id=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370474 4849 flags.go:64] FLAG: --qos-reserved=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370478 4849 flags.go:64] FLAG: --read-only-port="10255"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370482 4849 flags.go:64] FLAG: --register-node="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370486 4849 flags.go:64] FLAG: --register-schedulable="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370490 4849 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370498 4849 flags.go:64] FLAG: --registry-burst="10"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370502 4849 flags.go:64] FLAG: --registry-qps="5"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370506 4849 flags.go:64] FLAG: --reserved-cpus=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370510 4849 flags.go:64] FLAG: --reserved-memory=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370515 4849 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370520 4849 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370525 4849 flags.go:64] FLAG: --rotate-certificates="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370529 4849 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370533 4849 flags.go:64] FLAG: --runonce="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370536 4849 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370541 4849 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370547 4849 flags.go:64] FLAG: --seccomp-default="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370551 4849 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370556 4849 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370560 4849 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370564 4849 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370569 4849 flags.go:64] FLAG: --storage-driver-password="root"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370573 4849 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370577 4849 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370581 4849 flags.go:64] FLAG: --storage-driver-user="root"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370585 4849 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370589 4849 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370593 4849 flags.go:64] FLAG: --system-cgroups=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370597 4849 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370607 4849 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370611 4849 flags.go:64] FLAG: --tls-cert-file=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370615 4849 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370620 4849 flags.go:64] FLAG: --tls-min-version=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370624 4849 flags.go:64] FLAG: --tls-private-key-file=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370628 4849 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370632 4849 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370636 4849 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370640 4849 flags.go:64] FLAG: --v="2"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370646 4849 flags.go:64] FLAG: --version="false"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370652 4849 flags.go:64] FLAG: --vmodule=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370658 4849 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.370664 4849 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.371038 4849 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.378264 4849 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.378292 4849 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
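The feature_gate.go:386 line above is the merged result of the whole negotiation: the kubelet's defaults, overridden by whichever requested gates survived parsing. Its "{map[k:v ...]}" rendering is simply Go's fmt output for a map, so it can be picked apart with a throwaway parser like this sketch:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseGates turns a "{map[Name:true Other:false]}" log fragment into a map.
func parseGates(line string) map[string]bool {
	line = strings.TrimPrefix(line, "{map[")
	line = strings.TrimSuffix(line, "]}")
	gates := map[string]bool{}
	for _, kv := range strings.Fields(line) {
		name, val, ok := strings.Cut(kv, ":")
		if !ok {
			continue
		}
		b, err := strconv.ParseBool(val)
		if err != nil {
			continue
		}
		gates[name] = b
	}
	return gates
}

func main() {
	fmt.Println(parseGates("{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"))
}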
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.379257 4849 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.381616 4849 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.381691 4849 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
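With client rotation on, the kubelet schedules renewal well ahead of expiry: in the entries that follow, the certificate expires 2026-02-24 but the rotation deadline is 2025-11-27. That gap is expected; client-go's certificate manager jitters the deadline to a point roughly 70-90% of the way through the certificate's validity window, so all nodes do not renew at once. A sketch of the idea (not the exact upstream code; the notBefore date below is an assumed issue date, not taken from this log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point 70-90% through the validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)  // expiry from the log
	fmt.Println("rotate at:", rotationDeadline(notBefore, notAfter))
}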
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.382154 4849 server.go:997] "Starting client certificate rotation"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.382174 4849 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.382655 4849 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-27 22:33:47.964611507 +0000 UTC
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.382718 4849 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.392257 4849 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.397768 4849 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.398379 4849 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
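Note: two details in the kube-apiserver-client-kubelet entries above explain what follows. First, the logged rotation deadline (2025-11-27) is already in the past at this boot (Dec 09), so the certificate manager goes straight to "Rotating certificates". Second, the CSR POST fails with "connection refused" because nothing is listening on api-int.crc.testing:6443 yet; the kube-apiserver static pod has not started, and the manager simply retries with backoff, so the E-line is transient. client-go picks the rotation deadline at a jittered point late in the certificate's validity, commonly described as the 70 to 90 percent band. A sketch of that deadline arithmetic, with the band treated as an assumption rather than a quote of the client-go source:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random instant in the 70-90% band of the
    // certificate's validity, roughly what the client-go certificate
    // manager does; the exact jitter is an assumption here.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:52:08Z")
        notBefore := notAfter.AddDate(0, 0, -365) // assumed issue date, for illustration
        deadline := rotationDeadline(notBefore, notAfter)
        // When the deadline is already behind the wall clock, rotation starts
        // immediately, matching the "Rotating certificates" line above.
        fmt.Println(deadline, time.Now().After(deadline))
    }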
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.413902 4849 log.go:25] "Validated CRI v1 runtime API"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.437227 4849 log.go:25] "Validated CRI v1 image API"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.439036 4849 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.440964 4849 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-09-11-20-57-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.441091 4849 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.453842 4849 manager.go:217] Machine: {Timestamp:2025-12-09 11:26:58.452551389 +0000 UTC m=+0.992435725 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199472640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:28952ea2-405f-4451-ba01-96f0d1c5ff80 BootID:6e561bc1-3071-42d3-8f8a-26cb48f3e35f Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076107 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599734272 Type:vfs Inodes:3076107 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039894528 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b5:8d:53 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b5:8d:53 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:38:19:eb Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:5e:eb:47 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:60:9a:1b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:8f:12:81 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:d8:0d:8a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:de:4d:b0:42:93:c5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:56:bd:aa:d6:4e:df Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199472640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.454324 4849 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.454573 4849 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.462309 4849 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.462698 4849 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.462789 4849 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.463078 4849 topology_manager.go:138] "Creating topology manager with none policy"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.463133 4849 container_manager_linux.go:303] "Creating device plugin manager"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.463372 4849 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.463500 4849 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.463849 4849 state_mem.go:36] "Initialized new in-memory state store"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.464035 4849 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.466740 4849 kubelet.go:418] "Attempting to sync node with API server"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.466817 4849 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
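Note: "Adding static pod path" is the line that lets a single-node cluster pull itself up. Every manifest under /etc/kubernetes/manifests is run directly from disk as a static pod, with no API server involved, which is how kube-apiserver itself can start even while the connection-refused errors below are still occurring. The kubelet's file source re-reads the directory on a fixed interval. A rough Go sketch of such a periodic scan, illustrative only (the 20s interval is the kubelet's default file-check frequency, assumed here, and this is not the kubelet's actual implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    // scanManifests lists candidate static pod manifests the way the
    // kubelet's file source would discover them.
    func scanManifests(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var manifests []string
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".yaml", ".yml", ".json":
                manifests = append(manifests, filepath.Join(dir, e.Name()))
            }
        }
        return manifests, nil
    }

    func main() {
        for i := 0; i < 3; i++ {
            m, err := scanManifests("/etc/kubernetes/manifests")
            fmt.Println(m, err)
            time.Sleep(20 * time.Second) // assumed file-check frequency
        }
    }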
"Adding static pod path" path="/etc/kubernetes/manifests" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.466879 4849 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.466945 4849 kubelet.go:324] "Adding apiserver pod source" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.467002 4849 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.468620 4849 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.469039 4849 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470084 4849 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470740 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470768 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470779 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470786 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470798 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470810 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470817 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470827 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470838 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470846 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470900 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470908 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.470926 4849 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.471700 4849 server.go:1280] "Started kubelet" Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.471702 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.471854 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.471702 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.471934 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.472052 4849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.472348 4849 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.472470 4849 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 09 11:26:58 crc systemd[1]: Started Kubernetes Kubelet. Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.474521 4849 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.476894 4849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f8875cc38cfad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 11:26:58.471350189 +0000 UTC m=+1.011234505,LastTimestamp:2025-12-09 11:26:58.471350189 +0000 UTC m=+1.011234505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.477694 4849 server.go:460] "Adding debug handlers to kubelet server" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.478436 4849 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.478570 4849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.479386 4849 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:16:35.050687585 +0000 UTC Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.479466 4849 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 155h49m36.571224305s for next certificate rotation Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.480219 4849 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.480229 4849 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.484058 4849 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.487567 4849 factory.go:55] Registering systemd factory
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.487599 4849 factory.go:221] Registration of the systemd container factory successfully
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.491598 4849 factory.go:153] Registering CRI-O factory
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.491738 4849 factory.go:221] Registration of the crio container factory successfully
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.491635 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.491860 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.491763 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.492134 4849 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.492233 4849 factory.go:103] Registering Raw factory
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.492316 4849 manager.go:1196] Started watching for new ooms in manager
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.493169 4849 manager.go:319] Starting recovery of all containers
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495230 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495299 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
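Note: the reconstruct.go:130 entries that begin above and run for the rest of this record are the volume manager rebuilding its actual-state-of-world after restart. It rescans the pod directories still on disk under /var/lib/kubelet/pods and, since mounts cannot yet be verified against the API server, marks each volume "uncertain" until it is either confirmed in use or cleaned up. The volumeName values (kubernetes.io/<plugin>/<podUID>-<volume>) map directly onto the on-disk layout /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volume>. A small Go sketch of that scan, illustrative only:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // listReconstructedVolumes walks the kubelet's on-disk pod state:
    // /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volumeName>.
    // This is the data reconstruct.go reads back after a kubelet restart.
    func listReconstructedVolumes(root string) error {
        matches, err := filepath.Glob(filepath.Join(root, "pods", "*", "volumes", "*", "*"))
        if err != nil {
            return err
        }
        for _, m := range matches {
            volName := filepath.Base(m)
            plugin := filepath.Base(filepath.Dir(m)) // e.g. kubernetes.io~configmap
            podUID := filepath.Base(filepath.Dir(filepath.Dir(filepath.Dir(m))))
            fmt.Printf("pod %s plugin %s volume %s\n", podUID, plugin, volName)
        }
        return nil
    }

    func main() {
        if err := listReconstructedVolumes("/var/lib/kubelet"); err != nil {
            fmt.Println(err)
        }
    }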
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495325 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495336 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495347 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495358 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495368 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495581 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495593 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495605 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495650 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495664 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495681 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495710 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495722 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495732 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495743 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495757 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495769 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495780 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495792 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495806 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495820 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495831 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495843 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495857 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495868 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495881 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495896 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495908 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495919 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495931 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.495943 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.497840 4849 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 
11:26:58.497873 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.497892 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.497905 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.497938 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.497949 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.497961 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.497973 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.497984 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.497995 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498006 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498017 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498054 4849 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498085 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498096 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498108 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498118 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498130 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498142 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498188 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498215 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498228 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498239 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498251 4849 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498261 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498274 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498288 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498299 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498328 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498340 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498353 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498365 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498378 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498390 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498401 4849 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498432 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498496 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498511 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498524 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498534 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498576 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498617 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498629 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498832 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498865 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498877 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498888 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498899 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498926 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498938 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.498949 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499003 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499030 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499040 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499051 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499075 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499086 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499119 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499132 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499143 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499168 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499179 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499190 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499204 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499215 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499226 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499238 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499267 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499305 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499317 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499329 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499395 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499442 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499477 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499489 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499502 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499527 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499539 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499553 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499565 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499587 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499598 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499610 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499643 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499667 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499679 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499690 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499702 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499713 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499725 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499736 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499748 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499782 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499802 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499815 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499854 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499867 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499879 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499904 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499916 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499951 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499963 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499975 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.499986 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500025 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500037 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500049 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500062 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500215 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500231 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500243 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500280 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500304 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500316 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500328 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500372 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500401 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500609 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500629 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500642 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500653 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500664 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500675 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500686 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500719 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500730 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500743 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500755 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500808 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500840 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500852 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500864 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500900 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500911 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500921 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500932 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500944 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500956 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500966 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.500977 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501015 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501047 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501060 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501093 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501136 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501189 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501201 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501212 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501240 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501273 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501285 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501296 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501334 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501346 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501357 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501368 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501392 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501441 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501455 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501468 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501479 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501507 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501517 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501529 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501562 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501574 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501613 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501650 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501662 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501673 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501684 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501718 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501745 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501757 4849 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501767 4849 reconstruct.go:97] "Volume reconstruction finished"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.501776 4849 reconciler.go:26] "Reconciler: start to sync state"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.525683 4849 manager.go:324] Recovery completed
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.533144 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.533445 4849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.535114 4849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
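The reconstruct.go lines above are the kubelet's volume reconstruction pass: after a restart it walks the per-pod volume directories still present on disk and re-adds each mount to its "actual state of world" as uncertain, meaning the directory exists but the health of the mount is unproven until the reconciler re-verifies it. A minimal sketch of the idea (an illustrative model, not the kubelet's actual implementation; it assumes only the standard /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volume> on-disk layout):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// reconstructedVolume models one "actual state of world" entry. uncertain
// mirrors the log's "marked as uncertain": present on disk, not yet verified.
type reconstructedVolume struct {
	podUID    string
	plugin    string // e.g. "kubernetes.io~configmap"
	volume    string // e.g. "service-ca-bundle"
	uncertain bool
}

func reconstruct(podsDir string) ([]reconstructedVolume, error) {
	var out []reconstructedVolume
	pods, err := os.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		plugins, err := os.ReadDir(filepath.Join(podsDir, pod.Name(), "volumes"))
		if err != nil {
			continue // pod dir without a volumes subdir; nothing to reconstruct
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(podsDir, pod.Name(), "volumes", plugin.Name()))
			if err != nil {
				continue
			}
			for _, v := range vols {
				out = append(out, reconstructedVolume{
					podUID:    pod.Name(),
					plugin:    plugin.Name(),
					volume:    v.Name(),
					uncertain: true, // stays uncertain until a later mount/verify succeeds
				})
			}
		}
	}
	return out, nil
}

func main() {
	vols, err := reconstruct("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, v := range vols {
		fmt.Printf("podName=%q volumeName=%q uncertain=%v\n", v.podUID, v.plugin+"/"+v.volume, v.uncertain)
	}
}
```

Once "Volume reconstruction finished" is logged, the reconciler (reconciler.go above) begins syncing this reconstructed state against the desired state of the world.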
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.535149 4849 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.535190 4849 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.535282 4849 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 09 11:26:58 crc kubenswrapper[4849]: W1209 11:26:58.536003 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.536066 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.537732 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.537767 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.537779 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.539195 4849 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.539212 4849 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.539232 4849 state_mem.go:36] "Initialized new in-memory state store"
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.581049 4849 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.592555 4849 policy_none.go:49] "None policy: Start"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.593845 4849 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.593875 4849 state_mem.go:35] "Initializing new in-memory state store"
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.635391 4849 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.650271 4849 manager.go:334] "Starting Device Plugin manager"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.650323 4849 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.650336 4849 server.go:79] "Starting device plugin registration server"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.650713 4849 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.650733 4849 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
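The reflector warning above is client-go's informer machinery failing its initial LIST of RuntimeClass objects because nothing is answering on api-int.crc.testing:6443 yet (the kube-apiserver static pod is only started further down); reflectors keep retrying with backoff until the server comes up. The equivalent request, issued directly with client-go (a sketch; the kubeconfig path is an assumption, and credentials depend on the host):

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path: on this node the kubelet's kubeconfig points at api-int.crc.testing:6443.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The same call the reflector makes: LIST runtimeclasses.node.k8s.io/v1 with Limit=500.
	rcs, err := cs.NodeV1().RuntimeClasses().List(context.Background(), metav1.ListOptions{Limit: 500})
	if err != nil {
		// While the apiserver is down this fails exactly like the log line:
		// dial tcp ...:6443: connect: connection refused
		fmt.Fprintln(os.Stderr, "list failed:", err)
		os.Exit(1)
	}
	fmt.Println("runtime classes:", len(rcs.Items))
}
```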
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.650936 4849 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.651027 4849 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.651040 4849 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.666098 4849 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.693158 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.753078 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.755173 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.755239 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.755256 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.755299 4849 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.756012 4849 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.835699 4849 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.835867 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.837344 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.837427 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.837442 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.837729 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.838021 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
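Two things are worth noting above. First, the node-lease controller cannot reach the apiserver either and schedules a retry at interval="400ms"; the same message reappears below with interval="800ms", i.e. the retry interval doubles. Second, the five control-plane static pods arrive through the file source ("SyncLoop ADD" source="file"), read from the manifest directory on disk rather than from the unreachable apiserver, which is exactly what lets the control plane bootstrap itself. The doubling retry is a standard exponential backoff; a sketch of the pattern with k8s.io/apimachinery's wait helpers (the parameters beyond the 400ms start are illustrative, not the kubelet's exact values):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempt := 0
	backoff := wait.Backoff{
		Duration: 400 * time.Millisecond, // first retry interval, as in the log
		Factor:   2.0,                    // 400ms -> 800ms -> 1.6s -> ...
		Steps:    5,
		Cap:      7 * time.Second, // assumed cap for this sketch
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: ensuring lease kube-node-lease/crc\n", attempt)
		// Stand-in for the real request against the apiserver, which is
		// refusing connections at this point in the log.
		return false, nil // (false, nil) means: not done yet, sleep and retry
	})
	if err != nil {
		fmt.Println("gave up after", attempt, "attempts (apiserver still unreachable)")
	}
}
```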
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.838057 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.839358 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.839388 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.839397 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.839489 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.839524 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.839538 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.839789 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.839972 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.840053 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.840919 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.840943 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.840975 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.841038 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.841060 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.841147 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.841180 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.841633 4849 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.841714 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.843040 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.843083 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.843092 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.843203 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.843227 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.843236 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.843358 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.843454 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.843487 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.844182 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.844210 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.844219 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.844329 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.844350 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.844733 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.844760 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.844785 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.845201 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.845240 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.845253 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.906596 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.906722 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.906759 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.906785 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.906809 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.906832 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 
11:26:58.906881 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.906935 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.906970 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.906998 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.907016 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.907033 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.907049 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.907066 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.907105 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.956597 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
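All of the volumes verified above are kubernetes.io/host-path volumes belonging to the five static pods. hostPath volumes are not attachable, so VerifyControllerAttachedVolume completes immediately and the MountVolume.SetUp calls that follow are little more than a directory check. A sketch of what such a volume looks like in a pod spec, built with the k8s.io/api types (the Path value is hypothetical; the journal records only volume names and pod UIDs, not the host paths they resolve to):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	dirType := v1.HostPathDirectoryOrCreate
	// Modeled on the "cert-dir" volume of etcd-crc seen above; the Path
	// is an assumption for illustration only.
	vol := v1.Volume{
		Name: "cert-dir",
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{
				Path: "/etc/kubernetes/static-pod-resources/etcd-certs",
				Type: &dirType,
			},
		},
	}
	// The UniqueName in the log entries above follows <plugin>/<podUID>-<volume name>.
	uniqueName := "kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-" + vol.Name
	fmt.Println(uniqueName)
}
```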
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.957756 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.957789 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.957800 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:26:58 crc kubenswrapper[4849]: I1209 11:26:58.957823 4849 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 09 11:26:58 crc kubenswrapper[4849]: E1209 11:26:58.958195 4849 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc"
Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008230 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008281 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008310 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008330 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008355 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008374 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008395 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008436 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008461 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008478 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008480 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008482 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008495 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008514 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008585 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008599 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008610 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008619 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008644 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008574 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008680 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008676 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008703 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008751 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008777 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008636 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008645 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008487 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008806 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.008691 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: E1209 11:26:59.094938 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.176062 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.182572 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.196977 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.219552 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.224743 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 11:26:59 crc kubenswrapper[4849]: W1209 11:26:59.255458 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fb9da4e9dff3dd5132eea298fd8a1b915c0a531f04a19d54043fe67a7823b496 WatchSource:0}: Error finding container fb9da4e9dff3dd5132eea298fd8a1b915c0a531f04a19d54043fe67a7823b496: Status 404 returned error can't find the container with id fb9da4e9dff3dd5132eea298fd8a1b915c0a531f04a19d54043fe67a7823b496 Dec 09 11:26:59 crc kubenswrapper[4849]: W1209 11:26:59.256529 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d945c40ca7f1c01fcc3b365bda88ba9c76d2e7911f99a02f9937c5930b428886 WatchSource:0}: Error finding container d945c40ca7f1c01fcc3b365bda88ba9c76d2e7911f99a02f9937c5930b428886: Status 404 returned error can't find the container with id d945c40ca7f1c01fcc3b365bda88ba9c76d2e7911f99a02f9937c5930b428886 Dec 09 11:26:59 crc kubenswrapper[4849]: W1209 11:26:59.263688 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6388a45b2e64913439030553d393208bbfa9fbd3eb1d66ccff22b519e0056c07 WatchSource:0}: Error finding container 6388a45b2e64913439030553d393208bbfa9fbd3eb1d66ccff22b519e0056c07: Status 404 returned error can't find the container with id 6388a45b2e64913439030553d393208bbfa9fbd3eb1d66ccff22b519e0056c07 Dec 09 11:26:59 crc kubenswrapper[4849]: W1209 11:26:59.268639 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4e39e9866c3617ba3d7dd6e991254a65a23037dcaf10979c9866d333266d0721 WatchSource:0}: Error finding container 4e39e9866c3617ba3d7dd6e991254a65a23037dcaf10979c9866d333266d0721: Status 404 returned error can't find the container with id 4e39e9866c3617ba3d7dd6e991254a65a23037dcaf10979c9866d333266d0721 Dec 09 11:26:59 crc kubenswrapper[4849]: W1209 11:26:59.279318 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-853e741bf4b53e65fb013685835ded13104d2d08958756c2ce2e2749559776d5 WatchSource:0}: Error finding container 853e741bf4b53e65fb013685835ded13104d2d08958756c2ce2e2749559776d5: Status 404 returned error can't find the container with id 853e741bf4b53e65fb013685835ded13104d2d08958756c2ce2e2749559776d5 Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.359315 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.360897 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.360948 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.360967 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.360999 4849 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Dec 09 11:26:59 crc kubenswrapper[4849]: E1209 11:26:59.361641 4849 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Dec 09 11:26:59 crc kubenswrapper[4849]: W1209 11:26:59.398333 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:26:59 crc kubenswrapper[4849]: E1209 11:26:59.398500 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.473734 4849 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.541744 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"853e741bf4b53e65fb013685835ded13104d2d08958756c2ce2e2749559776d5"} Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.542623 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4e39e9866c3617ba3d7dd6e991254a65a23037dcaf10979c9866d333266d0721"} Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.543346 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6388a45b2e64913439030553d393208bbfa9fbd3eb1d66ccff22b519e0056c07"} Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.545288 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fb9da4e9dff3dd5132eea298fd8a1b915c0a531f04a19d54043fe67a7823b496"} Dec 09 11:26:59 crc kubenswrapper[4849]: I1209 11:26:59.546390 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d945c40ca7f1c01fcc3b365bda88ba9c76d2e7911f99a02f9937c5930b428886"} Dec 09 11:26:59 crc kubenswrapper[4849]: W1209 11:26:59.559458 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:26:59 crc kubenswrapper[4849]: E1209 11:26:59.559567 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 09 11:26:59 crc kubenswrapper[4849]: W1209 11:26:59.871304 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:26:59 crc kubenswrapper[4849]: E1209 11:26:59.871494 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 09 11:26:59 crc kubenswrapper[4849]: E1209 11:26:59.896144 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Dec 09 11:27:00 crc kubenswrapper[4849]: W1209 11:27:00.054766 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:27:00 crc kubenswrapper[4849]: E1209 11:27:00.054904 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.161962 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.164362 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.164401 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.164437 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.164467 4849 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 11:27:00 crc kubenswrapper[4849]: E1209 11:27:00.164945 4849 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.419452 4849 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 09 11:27:00 crc kubenswrapper[4849]: E1209 11:27:00.420846 4849 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.472904 4849 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.550542 4849 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239" exitCode=0 Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.550609 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239"} Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.550706 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.551823 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.551849 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.551860 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.552662 4849 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380" exitCode=0 Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.552710 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380"} Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.552785 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.553981 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.554006 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.554016 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.555014 4849 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0" exitCode=0 Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.555064 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0"} Dec 09 11:27:00 crc 
kubenswrapper[4849]: I1209 11:27:00.555149 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.555788 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.555811 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.555822 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.555837 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.556716 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.556744 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.556755 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.557067 4849 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="90ff15c84f80699e723bb08920d3ba539111947258b61611d74c4158714af446" exitCode=0 Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.557115 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.557119 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"90ff15c84f80699e723bb08920d3ba539111947258b61611d74c4158714af446"} Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.557857 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.557874 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.557883 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.559057 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856"} Dec 09 11:27:00 crc kubenswrapper[4849]: I1209 11:27:00.559083 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d"} Dec 09 11:27:01 crc kubenswrapper[4849]: W1209 11:27:01.380987 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 
38.102.83.177:6443: connect: connection refused Dec 09 11:27:01 crc kubenswrapper[4849]: E1209 11:27:01.381088 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.473533 4849 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:27:01 crc kubenswrapper[4849]: E1209 11:27:01.497658 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Dec 09 11:27:01 crc kubenswrapper[4849]: E1209 11:27:01.521465 4849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f8875cc38cfad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 11:26:58.471350189 +0000 UTC m=+1.011234505,LastTimestamp:2025-12-09 11:26:58.471350189 +0000 UTC m=+1.011234505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.565087 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9cbe5127dbb2a26b2683200bbda46e462673e98eb672667e624dc0d1f1058d7b"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.565208 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.566132 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.566164 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.566174 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.570621 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.570663 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
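Note the retry interval in the "Failed to ensure lease exists" entries: 800ms and 1.6s earlier, 3.2s just above, and 6.4s near the end of the log — a doubling backoff. A sketch of that schedule (the ceiling is an assumed illustration; the log only shows the first four values):

```go
package main

import (
	"fmt"
	"time"
)

// Reproduce the doubling retry schedule visible in the
// "Failed to ensure lease exists, will retry" entries:
// 800ms -> 1.6s -> 3.2s -> 6.4s. The 7s ceiling is an
// assumption for illustration, not taken from the log.
func main() {
	interval := 800 * time.Millisecond
	const ceiling = 7 * time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("retry #%d in %v\n", attempt, interval)
		interval *= 2
		if interval > ceiling {
			interval = ceiling
		}
	}
}
```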
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.570719 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.571597 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.571634 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.571643 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.576039 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.576071 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.576082 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.576151 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.576704 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.576722 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.576730 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.579224 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.579246 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.579255 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.579264 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.580461 4849 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143" exitCode=0 Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.580500 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143"} Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.580563 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.581040 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.581062 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.581082 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.765781 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.766873 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.766918 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.766929 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:01 crc kubenswrapper[4849]: I1209 11:27:01.766959 4849 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 11:27:01 crc kubenswrapper[4849]: E1209 11:27:01.767398 4849 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Dec 09 11:27:01 crc kubenswrapper[4849]: W1209 11:27:01.865301 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 09 11:27:01 crc kubenswrapper[4849]: E1209 11:27:01.865391 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.357017 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.585844 4849 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.585791 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa7d7c03dadfe2511eb4d748fd301cfa01cd417802e55ed01350084935c87138"} Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.587442 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.587571 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.587675 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.588344 4849 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9" exitCode=0 Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.588457 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9"} Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.588599 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.588649 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.588615 4849 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.588804 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.589673 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.590145 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.590166 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.590178 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.590225 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.590249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.590257 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.590804 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.590822 4849 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.590833 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.591185 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.591222 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:02 crc kubenswrapper[4849]: I1209 11:27:02.591233 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.072098 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.596919 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c"} Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.596953 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.596972 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392"} Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.596989 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1"} Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.597002 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8"} Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.597015 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e"} Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.597086 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.597175 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.600086 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.600147 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.600185 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.600934 4849 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.600994 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.601032 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.600934 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.601073 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:03 crc kubenswrapper[4849]: I1209 11:27:03.601124 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.497140 4849 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.599947 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.599949 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.601567 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.601578 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.601602 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.601609 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.601614 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.601630 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.911610 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.911825 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.913171 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.913255 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.913274 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.967880 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:04 crc 
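Each kubenswrapper payload above carries a klog header: a severity letter fused to an MMDD date (e.g. I1209), the wall-clock time, the PID, and the source file:line, followed by the message. A small parser for log spelunking (the regexp just mirrors the visible format; it is a convenience, not any Kubernetes API):

```go
package main

import (
	"fmt"
	"regexp"
)

// Parse the klog header seen in every kubenswrapper payload above:
// severity letter, MMDD date, time, PID, source file:line, message.
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+(\S+:\d+)\] (.*)$`)

func main() {
	// Example line copied from the log above.
	line := `I1209 11:27:04.497140 4849 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\n", m[1], m[2], m[3], m[4], m[5])
		fmt.Printf("msg=%s\n", m[6])
	}
}
```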
Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.969487 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.969505 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:04 crc kubenswrapper[4849]: I1209 11:27:04.969551 4849 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 09 11:27:05 crc kubenswrapper[4849]: I1209 11:27:05.439422 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 11:27:05 crc kubenswrapper[4849]: I1209 11:27:05.439592 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:27:05 crc kubenswrapper[4849]: I1209 11:27:05.440750 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:05 crc kubenswrapper[4849]: I1209 11:27:05.440798 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:05 crc kubenswrapper[4849]: I1209 11:27:05.440812 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.354433 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.354651 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.355883 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.355952 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.355966 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.395490 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.395648 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.396722 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.396756 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.396767 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.409302 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.409425 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.410163 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.410189 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.410196 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.413703 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.605589 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.605711 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.607050 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.607097 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.607109 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.889189 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.889401 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.890963 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.891007 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:06 crc kubenswrapper[4849]: I1209 11:27:06.891025 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:07 crc kubenswrapper[4849]: I1209 11:27:07.607681 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 11:27:07 crc kubenswrapper[4849]: I1209 11:27:07.608479 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:07 crc kubenswrapper[4849]: I1209 11:27:07.608506 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:07 crc kubenswrapper[4849]: I1209 11:27:07.608515 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:08 crc kubenswrapper[4849]: I1209 11:27:08.440118 4849 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
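The probe sequence above shows the gating order: kube-controller-manager's startup probe reports "unhealthy" twice, then "started", and only then does the kubelet run a readiness probe against it. A simplified sketch of that gating (probe outcomes are stubbed to mirror the log; this is not kubelet source):

```go
package main

import "fmt"

// Simplified sketch of the gating visible above: readiness is not
// probed until the startup probe reports success. The outcomes below
// mirror the log (unhealthy, unhealthy, started).
func main() {
	startupOutcomes := []bool{false, false, true}
	started := false

	for i, ok := range startupOutcomes {
		if !started {
			if !ok {
				fmt.Printf("probe %d: startup unhealthy, readiness deferred\n", i+1)
				continue
			}
			started = true
			fmt.Printf("probe %d: startup started\n", i+1)
		}
		fmt.Println("readiness probe now runs")
	}
}
```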
awaiting headers)" start-of-body= Dec 09 11:27:08 crc kubenswrapper[4849]: I1209 11:27:08.440195 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 11:27:08 crc kubenswrapper[4849]: E1209 11:27:08.666341 4849 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 11:27:09 crc kubenswrapper[4849]: I1209 11:27:09.213100 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 09 11:27:09 crc kubenswrapper[4849]: I1209 11:27:09.213646 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:09 crc kubenswrapper[4849]: I1209 11:27:09.219065 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:09 crc kubenswrapper[4849]: I1209 11:27:09.219301 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:09 crc kubenswrapper[4849]: I1209 11:27:09.219513 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:10 crc kubenswrapper[4849]: I1209 11:27:10.118914 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:27:10 crc kubenswrapper[4849]: I1209 11:27:10.119950 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:10 crc kubenswrapper[4849]: I1209 11:27:10.121562 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:10 crc kubenswrapper[4849]: I1209 11:27:10.121590 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:10 crc kubenswrapper[4849]: I1209 11:27:10.121601 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.474082 4849 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.676124 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.677381 4849 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa7d7c03dadfe2511eb4d748fd301cfa01cd417802e55ed01350084935c87138" exitCode=255 Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.677437 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aa7d7c03dadfe2511eb4d748fd301cfa01cd417802e55ed01350084935c87138"} Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.677584 4849 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.678202 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.678225 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.678234 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.678711 4849 scope.go:117] "RemoveContainer" containerID="aa7d7c03dadfe2511eb4d748fd301cfa01cd417802e55ed01350084935c87138" Dec 09 11:27:12 crc kubenswrapper[4849]: W1209 11:27:12.680688 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.680760 4849 trace.go:236] Trace[970320825]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 11:27:02.669) (total time: 10011ms): Dec 09 11:27:12 crc kubenswrapper[4849]: Trace[970320825]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10011ms (11:27:12.680) Dec 09 11:27:12 crc kubenswrapper[4849]: Trace[970320825]: [10.011469423s] [10.011469423s] END Dec 09 11:27:12 crc kubenswrapper[4849]: E1209 11:27:12.680775 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 09 11:27:12 crc kubenswrapper[4849]: W1209 11:27:12.832340 4849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.832694 4849 trace.go:236] Trace[1650779204]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 11:27:02.830) (total time: 10001ms): Dec 09 11:27:12 crc kubenswrapper[4849]: Trace[1650779204]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:27:12.832) Dec 09 11:27:12 crc kubenswrapper[4849]: Trace[1650779204]: [10.001788952s] [10.001788952s] END Dec 09 11:27:12 crc kubenswrapper[4849]: E1209 11:27:12.832716 4849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.989121 4849 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 11:27:12 crc kubenswrapper[4849]: I1209 11:27:12.989183 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 11:27:13 crc kubenswrapper[4849]: I1209 11:27:13.001389 4849 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 11:27:13 crc kubenswrapper[4849]: I1209 11:27:13.001470 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 11:27:13 crc kubenswrapper[4849]: I1209 11:27:13.403807 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:27:13 crc kubenswrapper[4849]: I1209 11:27:13.682672 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 11:27:13 crc kubenswrapper[4849]: I1209 11:27:13.685027 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b"} Dec 09 11:27:13 crc kubenswrapper[4849]: I1209 11:27:13.685147 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:13 crc kubenswrapper[4849]: I1209 11:27:13.686106 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:13 crc kubenswrapper[4849]: I1209 11:27:13.686208 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:13 crc kubenswrapper[4849]: I1209 11:27:13.686279 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:14 crc kubenswrapper[4849]: I1209 11:27:14.687385 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:14 crc kubenswrapper[4849]: I1209 11:27:14.687525 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:27:14 crc kubenswrapper[4849]: I1209 11:27:14.692393 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:14 crc kubenswrapper[4849]: I1209 11:27:14.692446 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:14 crc kubenswrapper[4849]: I1209 11:27:14.692461 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 09 11:27:15 crc kubenswrapper[4849]: I1209 11:27:15.690106 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:15 crc kubenswrapper[4849]: I1209 11:27:15.691097 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:15 crc kubenswrapper[4849]: I1209 11:27:15.691186 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:15 crc kubenswrapper[4849]: I1209 11:27:15.691205 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:16 crc kubenswrapper[4849]: I1209 11:27:16.203507 4849 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 09 11:27:16 crc kubenswrapper[4849]: I1209 11:27:16.359211 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:27:16 crc kubenswrapper[4849]: I1209 11:27:16.692673 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:16 crc kubenswrapper[4849]: I1209 11:27:16.694570 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:16 crc kubenswrapper[4849]: I1209 11:27:16.694609 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:16 crc kubenswrapper[4849]: I1209 11:27:16.694620 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:16 crc kubenswrapper[4849]: I1209 11:27:16.697543 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.209387 4849 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.695143 4849 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.695998 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.696031 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.696039 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.962361 4849 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 09 11:27:17 crc kubenswrapper[4849]: E1209 11:27:17.965702 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.969375 4849 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.970665 4849 trace.go:236] Trace[1841872332]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 11:27:05.101) (total time: 12869ms): Dec 09 11:27:17 crc kubenswrapper[4849]: Trace[1841872332]: ---"Objects listed" error: 12869ms (11:27:17.970) Dec 09 11:27:17 crc kubenswrapper[4849]: Trace[1841872332]: [12.869161579s] [12.869161579s] END Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.970864 4849 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.977008 4849 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.977286 4849 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.982886 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.983113 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.983178 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.983268 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:17 crc kubenswrapper[4849]: I1209 11:27:17.983335 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:17Z","lastTransitionTime":"2025-12-09T11:27:17Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.004173 4849 csr.go:261] certificate signing request csr-42vzl is approved, waiting to be issued
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.004205 4849 csr.go:257] certificate signing request csr-42vzl is issued
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.005649 4849 trace.go:236] Trace[1529523967]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 11:27:05.723) (total time: 12282ms):
Dec 09 11:27:18 crc kubenswrapper[4849]: Trace[1529523967]: ---"Objects listed" error: 12281ms (11:27:18.005)
Dec 09 11:27:18 crc kubenswrapper[4849]: Trace[1529523967]: [12.282117178s] [12.282117178s] END
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.005688 4849 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.025180 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:17Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.029263 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.029293 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.029303 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.029320 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.029331 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.044670 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.044706 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.044718 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.044737 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.044747 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.058689 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.058724 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.058733 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.058749 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.058758 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.084539 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.084581 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
11:27:18.084594 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.084613 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.084624 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.092824 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.093008 4849 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.098772 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 
11:27:18.098799 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.098807 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.098823 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.098847 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.201177 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.201211 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.201222 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.201240 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.201251 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.303228 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.303274 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.303283 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.303301 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.303310 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.383243 4849 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 09 11:27:18 crc kubenswrapper[4849]: W1209 11:27:18.383467 4849 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 09 11:27:18 crc kubenswrapper[4849]: W1209 11:27:18.383504 4849 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.383566 4849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.177:54558->38.102.83.177:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f887648fffb67 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 11:27:00.564777831 +0000 UTC m=+3.104662147,LastTimestamp:2025-12-09 11:27:00.564777831 +0000 UTC m=+3.104662147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.405860 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.405905 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.405916 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.405938 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.405951 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.440504 4849 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.440823 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.479551 4849 apiserver.go:52] "Watching apiserver" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.486330 4849 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.486657 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-lpj4f","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.487244 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.487282 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lpj4f" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.487351 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.487448 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.487579 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.487742 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.487287 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.487784 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.487757 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.488022 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.490121 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.490190 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.490863 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.492370 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.492647 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.492806 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.492965 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.493094 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.493225 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.494554 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.494750 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.495230 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.508572 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.508603 4849 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.508613 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.508629 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.508647 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.526694 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.535714 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.549121 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.559185 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.569389 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.578065 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.584262 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.584994 4849 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 09 
11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.592198 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.601588 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.609159 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.612839 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.612873 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.612882 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.612896 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.612909 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.623345 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.629166 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.638770 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.646083 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.653671 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.665881 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.674901 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.674943 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.675078 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.675224 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.675256 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.675379 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.675447 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 11:27:18 
crc kubenswrapper[4849]: I1209 11:27:18.675472 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.675493 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.675639 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.675666 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676045 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676073 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676131 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676155 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676201 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676223 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676248 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676291 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676313 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676430 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676456 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676597 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676626 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676650 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.675940 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676191 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676233 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676272 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676616 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676625 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676675 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676813 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676831 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676698 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.676977 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677008 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677035 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677059 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677083 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677106 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677128 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677149 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677201 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677225 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677249 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677271 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677296 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677318 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677340 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677362 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677386 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677461 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677488 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677513 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677535 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677571 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677593 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677615 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677636 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677662 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677706 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677733 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677758 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677783 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677809 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677839 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677862 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677888 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677925 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677951 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677976 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678002 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678027 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678052 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678091 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678116 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678138 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678161 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678185 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678208 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678234 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678260 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678283 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678325 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678348 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678394 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678434 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678457 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678483 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678506 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677030 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677224 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677236 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677431 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677750 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.677997 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678188 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678424 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.678519 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:27:19.178502436 +0000 UTC m=+21.718386752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.680699 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.680788 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.680825 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.680884 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.680953 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.680989 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.681050 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.681080 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.681148 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.681180 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.681335 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.682106 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.682255 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.682275 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.679367 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.679390 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.680053 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.682842 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683073 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683089 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683256 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683451 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683470 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683486 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683532 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683556 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683638 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683664 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683804 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.683832 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.684168 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.684493 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689260 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689326 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689353 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689376 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689402 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689452 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689478 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 11:27:18 crc 
kubenswrapper[4849]: I1209 11:27:18.689500 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689526 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689542 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689559 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689575 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689590 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689609 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689626 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689642 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689664 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689685 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689702 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689720 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689737 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689753 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689770 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689787 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689803 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689820 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689835 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689851 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689868 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689882 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689898 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689913 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689931 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689950 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689967 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689983 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.689999 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690018 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690038 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690054 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690074 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690097 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690121 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690143 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690162 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690176 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690194 4849 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690215 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690242 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690263 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690286 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690308 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690333 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.690382 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.700364 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.700958 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.701384 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.701503 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.701753 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.702148 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.702159 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.702524 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.702624 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.702739 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.702754 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.702805 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.702926 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.702931 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.703218 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.703249 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.703555 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.678712 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.703819 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.703844 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704066 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704123 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704143 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704335 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704434 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704636 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704682 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704683 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704919 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.704921 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.705167 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.705178 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.705433 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.705489 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.705561 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.705771 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.705927 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.706019 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.706145 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.706487 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.706722 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.706930 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.707349 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.707367 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.707807 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.707936 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.708134 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.708270 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.708348 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.708537 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.708600 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.708788 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.708807 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.709273 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.709740 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.710121 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.710478 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.710582 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.710729 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.710802 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.710852 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711646 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711714 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711735 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711774 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711794 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711811 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711849 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711867 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711884 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711901 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711931 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.711938 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712011 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712040 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712067 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712092 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712118 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712143 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712492 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712519 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712542 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712571 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712595 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712620 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712646 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712670 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712693 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712718 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712741 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712764 4849 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712791 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712814 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712840 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712865 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712887 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712912 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712914 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712936 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712965 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.712990 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713015 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713039 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713063 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713137 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713145 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713166 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713193 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713221 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fh69\" (UniqueName: \"kubernetes.io/projected/7d4c399a-d447-4219-9a6f-dcfcb77c7a5c-kube-api-access-7fh69\") pod \"node-resolver-lpj4f\" (UID: \"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\") " pod="openshift-dns/node-resolver-lpj4f" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713248 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713272 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713299 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713325 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713348 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713356 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713371 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713398 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713439 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713466 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713493 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713521 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713545 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7d4c399a-d447-4219-9a6f-dcfcb77c7a5c-hosts-file\") pod \"node-resolver-lpj4f\" (UID: \"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\") " pod="openshift-dns/node-resolver-lpj4f" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713629 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc 
kubenswrapper[4849]: I1209 11:27:18.713645 4849 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713661 4849 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713674 4849 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713686 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713699 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713711 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713725 4849 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713739 4849 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713751 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713764 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713776 4849 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713790 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713803 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713816 4849 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713827 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713839 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713852 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713865 4849 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713877 4849 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713889 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713903 4849 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713917 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713930 4849 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713943 4849 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713956 4849 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713968 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 
11:27:18.713981 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.713994 4849 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714008 4849 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714021 4849 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714034 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714047 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714059 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714071 4849 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714085 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714100 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714147 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714125 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714161 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714195 4849 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714209 4849 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714241 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714256 4849 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714267 4849 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714279 4849 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714289 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714316 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714327 4849 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714366 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714376 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714460 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" 
DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714472 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714481 4849 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714492 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714501 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714511 4849 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714543 4849 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714553 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714563 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714572 4849 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714582 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714592 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714625 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714634 4849 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc 
kubenswrapper[4849]: I1209 11:27:18.714643 4849 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714652 4849 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714662 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714671 4849 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714736 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714747 4849 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714853 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714863 4849 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714872 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714880 4849 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714889 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714898 4849 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714909 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714918 4849 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714929 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714938 4849 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714947 4849 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714956 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714965 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714979 4849 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714988 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714996 4849 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715007 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715016 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715024 4849 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715033 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715043 4849 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715052 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715062 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715073 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715083 4849 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.714519 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715114 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715515 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.715797 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.716735 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.716854 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.717045 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.717205 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.717577 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.717643 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.718120 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.718515 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.718560 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.719024 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.719525 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.719569 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.720188 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.720634 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.720753 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.721084 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.721126 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.721138 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.721150 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.721149 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.721167 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.721206 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.721707 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.724013 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.724650 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.725091 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.738207 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.738373 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.738755 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.739244 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.739770 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.740101 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.740239 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.740281 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.740478 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.740619 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.740954 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741011 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741062 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741171 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741252 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741255 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741472 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741538 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741584 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741819 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741883 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.741969 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.742138 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.742699 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.743007 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.743282 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.743300 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.743718 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.743920 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.744209 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.744262 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.744312 4849 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.744363 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:19.244347128 +0000 UTC m=+21.784231444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.744453 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.744562 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.744605 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.744618 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.744809 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745011 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745093 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745188 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745209 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745276 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745303 4849 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745520 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745542 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.745666 4849 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.745710 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:19.2457015 +0000 UTC m=+21.785585816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745772 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745843 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745986 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745991 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.746258 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.746507 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.746557 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.745718 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.746817 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.746865 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.747168 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.747732 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.747846 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.747969 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.748172 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.748334 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.748605 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.748825 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.749204 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.749463 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.749652 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.749808 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.750708 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.751205 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.751284 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.751554 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.751583 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.752622 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.754502 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.761313 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.761879 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.764079 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.764112 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.764126 4849 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.764191 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:19.264170961 +0000 UTC m=+21.804055317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.767512 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.767654 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.767665 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.767822 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.767878 4849 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:18 crc kubenswrapper[4849]: E1209 11:27:18.767982 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:19.267949931 +0000 UTC m=+21.807834247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.770297 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.776226 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.808710 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816491 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fh69\" (UniqueName: \"kubernetes.io/projected/7d4c399a-d447-4219-9a6f-dcfcb77c7a5c-kube-api-access-7fh69\") pod \"node-resolver-lpj4f\" (UID: \"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\") " pod="openshift-dns/node-resolver-lpj4f" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816541 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816581 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816625 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7d4c399a-d447-4219-9a6f-dcfcb77c7a5c-hosts-file\") pod \"node-resolver-lpj4f\" (UID: \"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\") " pod="openshift-dns/node-resolver-lpj4f" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816668 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816681 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816692 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816702 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816713 4849 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816723 4849 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816734 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816744 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816756 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816767 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816778 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816789 4849 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816798 4849 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816809 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816822 4849 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816834 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816845 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816856 4849 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816867 4849 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816877 4849 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816888 4849 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816897 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816907 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816917 4849 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816928 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816938 4849 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816950 4849 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816960 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816971 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816982 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.816992 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817006 4849 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817016 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817027 4849 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817037 4849 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817047 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817058 4849 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817067 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817077 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817088 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817098 4849 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817108 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817118 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817128 4849 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817138 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817148 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817158 4849 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817168 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817191 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817209 4849 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817219 4849 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817229 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817239 4849 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817262 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817285 4849 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817295 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817320 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817320 4849 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817331 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817373 4849 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817387 4849 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817399 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817436 4849 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817449 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817459 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817469 4849 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817475 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7d4c399a-d447-4219-9a6f-dcfcb77c7a5c-hosts-file\") pod \"node-resolver-lpj4f\" (UID: \"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\") " pod="openshift-dns/node-resolver-lpj4f" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817480 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817503 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817508 4849 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817516 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817542 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817553 4849 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817564 4849 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817574 4849 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817585 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817597 4849 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817607 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817617 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817628 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817638 4849 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817648 4849 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath 
\"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817658 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817668 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817678 4849 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817688 4849 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817698 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817708 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817718 4849 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817727 4849 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817738 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817748 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817762 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817772 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817784 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 
11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817794 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817805 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817815 4849 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817825 4849 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817834 4849 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817845 4849 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.817855 4849 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.819594 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.865144 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.882957 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.898813 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fh69\" (UniqueName: \"kubernetes.io/projected/7d4c399a-d447-4219-9a6f-dcfcb77c7a5c-kube-api-access-7fh69\") pod \"node-resolver-lpj4f\" (UID: \"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\") " pod="openshift-dns/node-resolver-lpj4f" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.937647 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.937700 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.937711 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.937726 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:18 crc kubenswrapper[4849]: I1209 11:27:18.937752 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:18Z","lastTransitionTime":"2025-12-09T11:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.006318 4849 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-09 11:22:17 +0000 UTC, rotation deadline is 2026-09-25 12:09:26.348965172 +0000 UTC Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.006391 4849 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6960h42m7.342577021s for next certificate rotation Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.040148 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.040180 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.040199 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.040214 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.040225 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:19Z","lastTransitionTime":"2025-12-09T11:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.100820 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.107934 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lpj4f" Dec 09 11:27:19 crc kubenswrapper[4849]: W1209 11:27:19.112116 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0b485fbe8159b778a7311e25aaf47a9eaf40976ea05f0e1b41d9060a9b13ba15 WatchSource:0}: Error finding container 0b485fbe8159b778a7311e25aaf47a9eaf40976ea05f0e1b41d9060a9b13ba15: Status 404 returned error can't find the container with id 0b485fbe8159b778a7311e25aaf47a9eaf40976ea05f0e1b41d9060a9b13ba15 Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.127937 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 11:27:19 crc kubenswrapper[4849]: W1209 11:27:19.131797 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d4c399a_d447_4219_9a6f_dcfcb77c7a5c.slice/crio-63a5b511819c3f1112219fc33cc5ffbea86acfbf123992932d984a62929b327a WatchSource:0}: Error finding container 63a5b511819c3f1112219fc33cc5ffbea86acfbf123992932d984a62929b327a: Status 404 returned error can't find the container with id 63a5b511819c3f1112219fc33cc5ffbea86acfbf123992932d984a62929b327a Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.142078 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.142113 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.142122 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.142166 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.142175 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:19Z","lastTransitionTime":"2025-12-09T11:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.220431 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.220623 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:27:20.220603754 +0000 UTC m=+22.760488070 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:27:19 crc kubenswrapper[4849]: W1209 11:27:19.224578 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b075e70e150d2ac03e295e2ed8f8422f087b7f4876e99ffd314bf57867125687 WatchSource:0}: Error finding container b075e70e150d2ac03e295e2ed8f8422f087b7f4876e99ffd314bf57867125687: Status 404 returned error can't find the container with id b075e70e150d2ac03e295e2ed8f8422f087b7f4876e99ffd314bf57867125687 Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.244038 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.244070 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.244078 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.244092 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.244101 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:19Z","lastTransitionTime":"2025-12-09T11:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.259430 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.271278 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.281197 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.283521 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.290878 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.321341 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.321402 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.321441 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.321462 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321540 4849 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321592 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2025-12-09 11:27:20.321576513 +0000 UTC m=+22.861460829 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321675 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321689 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321701 4849 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321727 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:20.321719488 +0000 UTC m=+22.861603804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321774 4849 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321806 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:20.321790959 +0000 UTC m=+22.861675275 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321866 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321883 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321893 4849 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.321928 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:20.321916822 +0000 UTC m=+22.861801148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.344828 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.353676 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.353723 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.353735 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.353754 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.353766 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:19Z","lastTransitionTime":"2025-12-09T11:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.356179 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-h76bl"] Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.356552 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.359584 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.359828 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.359954 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.360115 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.360242 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.364581 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.404191 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.405046 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429342 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-os-release\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429398 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-var-lib-kubelet\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429438 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-conf-dir\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429460 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-run-multus-certs\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429482 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-cnibin\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429501 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-daemon-config\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429518 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-etc-kubernetes\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429563 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-hostroot\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429597 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-system-cni-dir\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429615 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-socket-dir-parent\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429638 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-run-netns\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429659 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-run-k8s-cni-cncf-io\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429678 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-var-lib-cni-multus\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429697 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnlw\" (UniqueName: \"kubernetes.io/projected/e5c6e29f-6131-4daa-b297-81eb53e7384c-kube-api-access-zfnlw\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429728 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-var-lib-cni-bin\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429749 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-cni-dir\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.429767 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e5c6e29f-6131-4daa-b297-81eb53e7384c-cni-binary-copy\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.495884 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.509758 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.509812 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.509824 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.509844 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.509857 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:19Z","lastTransitionTime":"2025-12-09T11:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530521 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-hostroot\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530564 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-system-cni-dir\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530581 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-socket-dir-parent\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530596 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-run-netns\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530611 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-run-k8s-cni-cncf-io\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530629 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-var-lib-cni-multus\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530643 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnlw\" (UniqueName: \"kubernetes.io/projected/e5c6e29f-6131-4daa-b297-81eb53e7384c-kube-api-access-zfnlw\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530659 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e5c6e29f-6131-4daa-b297-81eb53e7384c-cni-binary-copy\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530672 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-var-lib-cni-bin\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530688 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-cni-dir\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530702 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-os-release\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530716 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-var-lib-kubelet\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530731 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-conf-dir\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530768 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-run-multus-certs\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530782 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-cnibin\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530796 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-daemon-config\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530797 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-run-k8s-cni-cncf-io\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530848 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-etc-kubernetes\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530812 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-etc-kubernetes\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530892 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-hostroot\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530910 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-var-lib-cni-multus\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.530975 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-system-cni-dir\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531136 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-socket-dir-parent\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531170 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-run-netns\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531198 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-var-lib-kubelet\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531226 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-var-lib-cni-bin\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531359 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-cni-dir\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531459 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-os-release\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531501 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-host-run-multus-certs\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531539 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-conf-dir\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531590 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e5c6e29f-6131-4daa-b297-81eb53e7384c-cnibin\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.531733 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e5c6e29f-6131-4daa-b297-81eb53e7384c-cni-binary-copy\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.532441 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e5c6e29f-6131-4daa-b297-81eb53e7384c-multus-daemon-config\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.605188 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.613186 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.613217 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.613226 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.613240 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.613249 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:19Z","lastTransitionTime":"2025-12-09T11:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.616934 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnlw\" (UniqueName: \"kubernetes.io/projected/e5c6e29f-6131-4daa-b297-81eb53e7384c-kube-api-access-zfnlw\") pod \"multus-h76bl\" (UID: \"e5c6e29f-6131-4daa-b297-81eb53e7384c\") " pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.670872 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-h76bl"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.720822 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.720862 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.720875 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.720894 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.720906 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:19Z","lastTransitionTime":"2025-12-09T11:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.816585 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.817115 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.818299 4849 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b" exitCode=255
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.818433 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.818552 4849 scope.go:117] "RemoveContainer" containerID="aa7d7c03dadfe2511eb4d748fd301cfa01cd417802e55ed01350084935c87138"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.832024 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.832218 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.832276 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.832334 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.832387 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:19Z","lastTransitionTime":"2025-12-09T11:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.832617 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lpj4f" event={"ID":"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c","Type":"ContainerStarted","Data":"94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.832708 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lpj4f" event={"ID":"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c","Type":"ContainerStarted","Data":"63a5b511819c3f1112219fc33cc5ffbea86acfbf123992932d984a62929b327a"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.833942 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.834953 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.835055 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0b485fbe8159b778a7311e25aaf47a9eaf40976ea05f0e1b41d9060a9b13ba15"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.837562 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h76bl" event={"ID":"e5c6e29f-6131-4daa-b297-81eb53e7384c","Type":"ContainerStarted","Data":"79529623dda1581a99f1111d41d29818fb24f77b3608ec19b07dd895b52ed374"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.838347 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b075e70e150d2ac03e295e2ed8f8422f087b7f4876e99ffd314bf57867125687"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.840602 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.840701 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.840772 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2a55829acc769d675da835c1a2d73aaa81abce470cd8991e06974f6bc12b3e44"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.887348 4849 scope.go:117] "RemoveContainer" containerID="8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b"
Dec 09 11:27:19 crc kubenswrapper[4849]: E1209 11:27:19.887535 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.901641 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.935244 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.935279 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.935291 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.935306 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.935320 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:19Z","lastTransitionTime":"2025-12-09T11:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.991396 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.993845 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-89kpx"]
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.994230 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-89kpx"
Dec 09 11:27:19 crc kubenswrapper[4849]: I1209 11:27:19.998015 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6hf97"]
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:19.998654 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:19.999498 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:19.999984 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lwsgz"]
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.000703 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lwsgz"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.004534 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.004968 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 09 11:27:20 crc kubenswrapper[4849]: W1209 11:27:20.010249 4849 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object
Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.010285 4849 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 09 11:27:20 crc kubenswrapper[4849]: W1209 11:27:20.010691 4849 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object
Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.010706 4849 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.014329 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.014769 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.016876 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.017422 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.021807 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.024325 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.032323 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.042761 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.042803 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.042814 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.042831 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.042842 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:20Z","lastTransitionTime":"2025-12-09T11:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.055253 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.058558 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.059256 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.073115 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088146 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zr8\" (UniqueName: \"kubernetes.io/projected/157c6f6c-042b-4da3-934e-a08474e56486-kube-api-access-67zr8\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088181 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-script-lib\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088200 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/de61302b-e1bc-4372-8485-36b4fde18e80-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088248 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/157c6f6c-042b-4da3-934e-a08474e56486-proxy-tls\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088265 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088285 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-etc-openvswitch\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088300 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de61302b-e1bc-4372-8485-36b4fde18e80-cni-binary-copy\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088318 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-systemd-units\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088336 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-ovn\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088351 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-node-log\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088366 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-netd\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088393 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-bin\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088427 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/157c6f6c-042b-4da3-934e-a08474e56486-rootfs\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088449 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-netns\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088470 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grflc\" (UniqueName: \"kubernetes.io/projected/de61302b-e1bc-4372-8485-36b4fde18e80-kube-api-access-grflc\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088488 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088505 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-systemd\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088520 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-openvswitch\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088536 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jm22\" (UniqueName: \"kubernetes.io/projected/205e41c5-82b8-4bac-a27a-49f1e0da94e5-kube-api-access-5jm22\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088554 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-slash\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088569 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-var-lib-openvswitch\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088585 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-kubelet\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088602 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088617 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-config\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088649 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-system-cni-dir\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088668 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-os-release\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088689 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-log-socket\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088706 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovn-node-metrics-cert\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088736 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/157c6f6c-042b-4da3-934e-a08474e56486-mcd-auth-proxy-config\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088752 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-env-overrides\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.088768 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-cnibin\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.150214 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.150249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.150257 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.150275 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.150285 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:20Z","lastTransitionTime":"2025-12-09T11:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192373 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-netd\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192424 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-bin\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192447 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/157c6f6c-042b-4da3-934e-a08474e56486-rootfs\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192465 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-netns\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192481 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grflc\" (UniqueName: \"kubernetes.io/projected/de61302b-e1bc-4372-8485-36b4fde18e80-kube-api-access-grflc\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192510 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192528 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-openvswitch\") pod \"ovnkube-node-6hf97\" (UID: 
\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192564 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jm22\" (UniqueName: \"kubernetes.io/projected/205e41c5-82b8-4bac-a27a-49f1e0da94e5-kube-api-access-5jm22\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192581 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-systemd\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192595 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-slash\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192613 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-var-lib-openvswitch\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192630 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-kubelet\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192643 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192686 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-config\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192702 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-system-cni-dir\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192739 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-os-release\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: 
\"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192784 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-log-socket\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192803 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovn-node-metrics-cert\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192820 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/157c6f6c-042b-4da3-934e-a08474e56486-mcd-auth-proxy-config\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192840 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-env-overrides\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192866 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-cnibin\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192891 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/de61302b-e1bc-4372-8485-36b4fde18e80-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192915 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67zr8\" (UniqueName: \"kubernetes.io/projected/157c6f6c-042b-4da3-934e-a08474e56486-kube-api-access-67zr8\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192953 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-script-lib\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192971 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.192988 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/157c6f6c-042b-4da3-934e-a08474e56486-proxy-tls\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193001 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-etc-openvswitch\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193014 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de61302b-e1bc-4372-8485-36b4fde18e80-cni-binary-copy\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193032 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-ovn\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193046 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-node-log\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193059 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-systemd-units\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193126 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-systemd-units\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193167 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-netd\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193187 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-bin\") pod 
\"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193218 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/157c6f6c-042b-4da3-934e-a08474e56486-rootfs\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193237 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-netns\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193570 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193598 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-openvswitch\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193750 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-systemd\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193774 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-slash\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193797 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-var-lib-openvswitch\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193822 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-kubelet\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.193849 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.194397 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-config\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.194449 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-system-cni-dir\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.194490 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-os-release\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.194509 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-log-socket\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.199344 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovn-node-metrics-cert\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.199483 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.199586 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-script-lib\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.200201 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/157c6f6c-042b-4da3-934e-a08474e56486-mcd-auth-proxy-config\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.200665 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-env-overrides\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.200727 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de61302b-e1bc-4372-8485-36b4fde18e80-cnibin\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.202443 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de61302b-e1bc-4372-8485-36b4fde18e80-cni-binary-copy\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.202492 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-etc-openvswitch\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.202520 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-ovn\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.202576 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-node-log\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.308340 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/157c6f6c-042b-4da3-934e-a08474e56486-proxy-tls\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.309001 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.309273 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:27:22.309252905 +0000 UTC m=+24.849137221 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.316079 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.316116 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.316125 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.316140 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.316150 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:20Z","lastTransitionTime":"2025-12-09T11:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.390587 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.410155 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.410202 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.410226 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.410286 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410429 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410445 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410454 4849 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410495 4849 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:22.410482721 +0000 UTC m=+24.950367027 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410540 4849 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410560 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:22.410554113 +0000 UTC m=+24.950438429 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410584 4849 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410601 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:22.410595974 +0000 UTC m=+24.950480290 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410639 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410647 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410653 4849 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.410674 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:22.410668055 +0000 UTC m=+24.950552371 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.411719 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67zr8\" (UniqueName: \"kubernetes.io/projected/157c6f6c-042b-4da3-934e-a08474e56486-kube-api-access-67zr8\") pod \"machine-config-daemon-89kpx\" (UID: \"157c6f6c-042b-4da3-934e-a08474e56486\") " pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.418558 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.418609 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.418623 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.418640 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.418652 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:20Z","lastTransitionTime":"2025-12-09T11:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.423813 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jm22\" (UniqueName: \"kubernetes.io/projected/205e41c5-82b8-4bac-a27a-49f1e0da94e5-kube-api-access-5jm22\") pod \"ovnkube-node-6hf97\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.438396 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.440933 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grflc\" (UniqueName: \"kubernetes.io/projected/de61302b-e1bc-4372-8485-36b4fde18e80-kube-api-access-grflc\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.456352 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.476723 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.493580 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.521207 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.521249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.521258 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.521272 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.521281 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:20Z","lastTransitionTime":"2025-12-09T11:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.534195 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.539124 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.539267 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.539356 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.539437 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.539498 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.539558 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.541268 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.542210 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.543610 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.544389 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.545760 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.546452 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.547201 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.548511 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.549312 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.553928 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.554721 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.556105 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.556821 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.557747 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.558951 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.559637 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.560929 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.561522 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.562210 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.562602 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.565139 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.565842 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.567120 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.567806 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.569141 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.569766 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.570653 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.572187 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.574201 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.575090 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.576277 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.577185 4849 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.577315 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.579963 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.580658 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.581237 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.583632 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.585000 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.585035 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.585759 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.587022 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.587906 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.589051 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.589872 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.591189 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.592560 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.593247 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.594002 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.595114 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.596263 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.597397 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.598096 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.599259 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.599956 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.600766 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.601946 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.614779 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.623741 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.623788 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.623802 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.623820 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.623831 4849 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:20Z","lastTransitionTime":"2025-12-09T11:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.628173 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.640105 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa7d7c03dadfe2511eb4d748fd301cfa01cd417802e55ed01350084935c87138\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:12Z\\\",\\\"message\\\":\\\"W1209 11:27:01.702204 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1209 11:27:01.702474 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765279621 cert, and key in /tmp/serving-cert-3710463663/serving-signer.crt, /tmp/serving-cert-3710463663/serving-signer.key\\\\nI1209 11:27:01.827808 1 observer_polling.go:159] Starting file observer\\\\nW1209 11:27:01.829720 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1209 11:27:01.829870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:01.830558 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3710463663/tls.crt::/tmp/serving-cert-3710463663/tls.key\\\\\\\"\\\\nF1209 11:27:12.200958 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: W1209 11:27:20.640492 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod157c6f6c_042b_4da3_934e_a08474e56486.slice/crio-8f0592212a2e5a4be2fdba1e57154ab0f431f7f92d82b88a978eff782b3c4e87 WatchSource:0}: Error finding container 8f0592212a2e5a4be2fdba1e57154ab0f431f7f92d82b88a978eff782b3c4e87: Status 404 returned error can't find the container with id 8f0592212a2e5a4be2fdba1e57154ab0f431f7f92d82b88a978eff782b3c4e87 Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.647793 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.659616 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.681716 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.706731 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.722852 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.726352 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.726393 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.726402 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.726434 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.726446 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:20Z","lastTransitionTime":"2025-12-09T11:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.738986 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.756005 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.767552 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.829020 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.829063 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.829074 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.829090 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.829103 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:20Z","lastTransitionTime":"2025-12-09T11:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.875648 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.875693 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"8f0592212a2e5a4be2fdba1e57154ab0f431f7f92d82b88a978eff782b3c4e87"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.878909 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.881665 4849 scope.go:117] "RemoveContainer" containerID="8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b" Dec 09 11:27:20 crc kubenswrapper[4849]: E1209 11:27:20.881819 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.882842 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" exitCode=0 Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.882936 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.882995 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"62b7e5b3ecf19025402a615a9915c157769ded09a0e0621db3b71c90fd21c5b7"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.886499 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h76bl" event={"ID":"e5c6e29f-6131-4daa-b297-81eb53e7384c","Type":"ContainerStarted","Data":"362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.893811 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.907850 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.932083 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.932119 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.932131 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.932150 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.932161 4849 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:20Z","lastTransitionTime":"2025-12-09T11:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.933975 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.948872 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.963472 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.979160 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.980272 4849 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:20 crc kubenswrapper[4849]: I1209 11:27:20.995578 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:20Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.008178 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.021527 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.031914 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.034462 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.034497 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.034508 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.034521 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.034533 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.042762 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.054608 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.076178 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.087310 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.099308 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.118246 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.129467 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.137203 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.137252 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.137263 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.137283 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.137297 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.143726 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.163669 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.184428 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-0
9T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b4
59b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.199470 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: E1209 11:27:21.201429 4849 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Dec 09 11:27:21 crc kubenswrapper[4849]: E1209 11:27:21.201507 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de61302b-e1bc-4372-8485-36b4fde18e80-cni-sysctl-allowlist podName:de61302b-e1bc-4372-8485-36b4fde18e80 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:21.701484899 +0000 UTC m=+24.241369225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/de61302b-e1bc-4372-8485-36b4fde18e80-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-lwsgz" (UID: "de61302b-e1bc-4372-8485-36b4fde18e80") : failed to sync configmap cache: timed out waiting for the condition Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.212650 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.228910 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.242091 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.242141 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.242153 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.242173 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 
09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.242185 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.264257 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.304671 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.325966 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.345816 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.345858 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.345870 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.345886 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.345897 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.456035 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.456105 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.456121 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.456141 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.456154 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.520103 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.558819 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.558863 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.558872 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.558890 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.558900 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.664484 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.664533 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.664551 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.664571 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.664585 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.720251 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/de61302b-e1bc-4372-8485-36b4fde18e80-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.720834 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/de61302b-e1bc-4372-8485-36b4fde18e80-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lwsgz\" (UID: \"de61302b-e1bc-4372-8485-36b4fde18e80\") " pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.767118 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.767145 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.767153 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.767166 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.767174 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.777881 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qrt6l"] Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.778216 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.780093 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.780439 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.780595 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.781765 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.794831 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.807374 4849 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.820240 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.820770 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fe9f884-b4dd-4a85-8554-ad36d1ab3b69-host\") pod \"node-ca-qrt6l\" (UID: \"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\") " pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.820805 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjpc\" (UniqueName: \"kubernetes.io/projected/9fe9f884-b4dd-4a85-8554-ad36d1ab3b69-kube-api-access-xxjpc\") pod \"node-ca-qrt6l\" (UID: \"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\") " pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.821029 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9fe9f884-b4dd-4a85-8554-ad36d1ab3b69-serviceca\") pod \"node-ca-qrt6l\" (UID: \"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\") " pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.835165 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.849209 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.854228 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.870381 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.871271 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.871295 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.871304 4849 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.871318 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.871344 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.887583 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.890949 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.891006 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.891970 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerStarted","Data":"961b3c96286c277680f5a0b88417a6b458fbdd35a01323bb969205d63557d50d"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.895129 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.921474 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fe9f884-b4dd-4a85-8554-ad36d1ab3b69-host\") pod \"node-ca-qrt6l\" (UID: \"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\") " pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.921526 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjpc\" (UniqueName: \"kubernetes.io/projected/9fe9f884-b4dd-4a85-8554-ad36d1ab3b69-kube-api-access-xxjpc\") pod \"node-ca-qrt6l\" (UID: \"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\") " pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.921474 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.921604 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9fe9f884-b4dd-4a85-8554-ad36d1ab3b69-serviceca\") pod \"node-ca-qrt6l\" (UID: \"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\") " pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.921856 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fe9f884-b4dd-4a85-8554-ad36d1ab3b69-host\") pod \"node-ca-qrt6l\" (UID: \"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\") " pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.923360 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9fe9f884-b4dd-4a85-8554-ad36d1ab3b69-serviceca\") pod \"node-ca-qrt6l\" (UID: \"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\") " pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.941482 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.947052 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjpc\" (UniqueName: \"kubernetes.io/projected/9fe9f884-b4dd-4a85-8554-ad36d1ab3b69-kube-api-access-xxjpc\") pod \"node-ca-qrt6l\" (UID: \"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\") " pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.962078 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.973498 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.973520 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.973528 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.973549 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.973558 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:21Z","lastTransitionTime":"2025-12-09T11:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.984638 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:21 crc kubenswrapper[4849]: I1209 11:27:21.999452 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:21Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.011145 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.035631 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.052261 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.066206 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.076026 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.076066 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.076076 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.076090 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.076099 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:22Z","lastTransitionTime":"2025-12-09T11:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.090361 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qrt6l" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.090865 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.118389 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.129671 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.160511 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.180369 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.180401 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.180421 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.180433 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.180442 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:22Z","lastTransitionTime":"2025-12-09T11:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.181487 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.196020 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.209558 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.221329 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.234790 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.248539 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.270431 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.281987 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.282026 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.282038 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.282055 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.282075 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:22Z","lastTransitionTime":"2025-12-09T11:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.292710 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.325111 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.325325 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:27:26.325306039 +0000 UTC m=+28.865190355 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.385220 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.385262 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.385273 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.385287 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.385297 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:22Z","lastTransitionTime":"2025-12-09T11:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.436518 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.436597 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.436623 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.436644 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436718 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436758 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436771 4849 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436810 4849 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436821 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:26.43680517 +0000 UTC m=+28.976689476 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436745 4849 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436865 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:26.436848681 +0000 UTC m=+28.976733087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436886 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:26.436875941 +0000 UTC m=+28.976760387 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436900 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436909 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436916 4849 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.436936 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:26.436929584 +0000 UTC m=+28.976813900 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.487381 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.487432 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.487446 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.487459 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.487468 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:22Z","lastTransitionTime":"2025-12-09T11:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.542313 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.542430 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.542487 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.542530 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.542569 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:22 crc kubenswrapper[4849]: E1209 11:27:22.542606 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.589531 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.589575 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.589584 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.589599 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.589608 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:22Z","lastTransitionTime":"2025-12-09T11:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.696929 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.696958 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.696969 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.696984 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.696995 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:22Z","lastTransitionTime":"2025-12-09T11:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.798945 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.799959 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.800053 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.800141 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.800248 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:22Z","lastTransitionTime":"2025-12-09T11:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.900269 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.900316 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.900361 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.900373 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.901662 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.901686 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.901683 4849 generic.go:334] "Generic (PLEG): container finished" podID="de61302b-e1bc-4372-8485-36b4fde18e80" containerID="e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a" exitCode=0 Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.901741 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerDied","Data":"e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.901694 4849 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.901783 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.901795 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:22Z","lastTransitionTime":"2025-12-09T11:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.904188 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qrt6l" event={"ID":"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69","Type":"ContainerStarted","Data":"e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.904247 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qrt6l" event={"ID":"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69","Type":"ContainerStarted","Data":"4a8af06037752d047cd346cc86ea9b0bb909a8821293dae9fc1376148cc432ab"} Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.915160 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.927976 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.945560 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.956984 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.966316 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:22 crc kubenswrapper[4849]: I1209 11:27:22.978513 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.001541 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:22Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.003728 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.003760 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.003768 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.003784 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.003793 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.017035 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.033199 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.054879 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.066433 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.078237 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.098950 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.105900 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.105925 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.105933 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.105947 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.105958 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.113920 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.127513 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.145830 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.154347 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.173212 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.184283 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.198645 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.207845 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.208096 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.208105 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.208118 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.208127 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.213156 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.227993 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.239992 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.261048 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.272671 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.289281 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.307326 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.310173 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.310197 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.310205 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.310219 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.310227 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.321552 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.403692 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.404429 4849 scope.go:117] "RemoveContainer" containerID="8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b"
Dec 09 11:27:23 crc kubenswrapper[4849]: E1209 11:27:23.404645 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.412819 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.412846 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.412855 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.412868 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.412877 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.515564 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.515618 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.515641 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.515669 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.515691 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.618060 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.618081 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.618089 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.618101 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.618109 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.719806 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.719837 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.719846 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.719860 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.719869 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.821830 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.821875 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.821891 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.821910 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.821924 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.908317 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerStarted","Data":"ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b"}
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.910057 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1"}
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.922092 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.935195 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.937130 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.937174 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.937184 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.937202 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.937214 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:23Z","lastTransitionTime":"2025-12-09T11:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.950215 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z"
Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.962845 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status:
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.980200 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:23 crc kubenswrapper[4849]: I1209 11:27:23.999216 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:23Z is after 2025-08-24T17:21:41Z"
Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.017650 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z"
Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.039706 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.039761 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.039776 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.039819 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.039832 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.047700 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84
d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.062498 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.078083 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.111214 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.123088 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.134886 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.141996 4849 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.142034 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.142045 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.142059 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.142069 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.151826 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z 
is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.171364 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.185810 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.199056 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.222159 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.237360 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.244843 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.244878 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.244887 4849 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.244903 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.244913 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.249061 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.266830 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z 
is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.277530 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.291942 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.305158 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.318826 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.331997 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.346905 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.346945 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.346955 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.346971 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.346980 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.352060 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entry
point\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.365028 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.449251 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.449303 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.449316 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.449334 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.449348 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.536025 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.536110 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.536135 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:24 crc kubenswrapper[4849]: E1209 11:27:24.536713 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:24 crc kubenswrapper[4849]: E1209 11:27:24.536921 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:24 crc kubenswrapper[4849]: E1209 11:27:24.537088 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.551754 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.552045 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.552170 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.552273 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.552438 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.656005 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.656498 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.656621 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.656728 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.656826 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.759565 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.759597 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.759609 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.759626 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.759640 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.862990 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.863064 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.863086 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.863118 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.863135 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.918527 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.920568 4849 generic.go:334] "Generic (PLEG): container finished" podID="de61302b-e1bc-4372-8485-36b4fde18e80" containerID="ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b" exitCode=0 Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.920673 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerDied","Data":"ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.943604 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.959206 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.966135 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.966191 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.966204 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.966227 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.966243 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:24Z","lastTransitionTime":"2025-12-09T11:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.973508 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:24 crc kubenswrapper[4849]: I1209 11:27:24.986504 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.000614 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:24Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.016683 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.026299 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.043212 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.057036 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.067728 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.067767 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.067780 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.067799 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.067845 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.069864 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.089020 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z 
is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.101686 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.117172 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.129111 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.170788 4849 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.170850 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.170866 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.170891 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.170905 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.274230 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.274277 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.274291 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.274312 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.274326 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.377077 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.377130 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.377140 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.377161 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.377173 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.444392 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.449288 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.458242 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.470098 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc
23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.479302 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.479362 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.479376 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.479396 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.479425 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.486458 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.499795 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.516052 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.531182 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.547507 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.566342 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.576789 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.581355 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.581395 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.581440 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.581457 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.581468 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.591935 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.605212 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.618859 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.629925 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.644322 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.654127 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.665674 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.677089 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.684967 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.685022 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.685036 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.685056 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.685069 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.692836 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.709444 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.728776 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.745961 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.760744 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.781605 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.787839 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.787886 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.787898 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.787918 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.787933 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.800525 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.813350 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.828130 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.843775 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.860710 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.875457 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.891434 4849 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.891483 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.891497 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.891521 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.891541 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.904168 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z 
is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.927910 4849 generic.go:334] "Generic (PLEG): container finished" podID="de61302b-e1bc-4372-8485-36b4fde18e80" containerID="7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd" exitCode=0 Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.928012 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerDied","Data":"7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd"} Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.959795 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e
37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.982605 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.998753 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.998784 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.998792 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:25 crc kubenswrapper[4849]: I1209 11:27:25.998806 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:25.998826 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:25Z","lastTransitionTime":"2025-12-09T11:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.002134 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.024684 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.045317 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.063564 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.077120 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.095233 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.101530 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.101562 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.101595 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.101613 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.101625 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:26Z","lastTransitionTime":"2025-12-09T11:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.115717 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.133574 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.148061 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.161431 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.175550 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.188842 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.198712 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 
2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.204324 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.204350 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.204359 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.204372 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.204381 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:26Z","lastTransitionTime":"2025-12-09T11:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.307351 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.307387 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.307403 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.307451 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.307467 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:26Z","lastTransitionTime":"2025-12-09T11:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.384316 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.384545 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:27:34.384518904 +0000 UTC m=+36.924403240 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.410554 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.410592 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.410604 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.410620 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.410631 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:26Z","lastTransitionTime":"2025-12-09T11:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.485795 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.485844 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.485869 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.485919 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486018 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486019 4849 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486077 4849 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486085 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486046 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486106 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486122 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:34.486105778 +0000 UTC m=+37.025990094 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486123 4849 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486122 4849 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486136 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:34.486130929 +0000 UTC m=+37.026015245 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486226 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:34.486200291 +0000 UTC m=+37.026084647 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.486268 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:34.486252192 +0000 UTC m=+37.026136628 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.512953 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.513011 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.513037 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.513067 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.513088 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:26Z","lastTransitionTime":"2025-12-09T11:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.544771 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.544996 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.545141 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.545268 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.545371 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:26 crc kubenswrapper[4849]: E1209 11:27:26.545533 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.615627 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.615671 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.615683 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.615699 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.615711 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:26Z","lastTransitionTime":"2025-12-09T11:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.718625 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.718672 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.718683 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.718700 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.718714 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:26Z","lastTransitionTime":"2025-12-09T11:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.820377 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.820425 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.820435 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.820450 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.820461 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:26Z","lastTransitionTime":"2025-12-09T11:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.924828 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.925104 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.925371 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.925385 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.925394 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:26Z","lastTransitionTime":"2025-12-09T11:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.934532 4849 generic.go:334] "Generic (PLEG): container finished" podID="de61302b-e1bc-4372-8485-36b4fde18e80" containerID="59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10" exitCode=0 Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.934582 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerDied","Data":"59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10"} Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.967666 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3edd
f0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:26 crc kubenswrapper[4849]: I1209 11:27:26.980431 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:26Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.007461 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.026037 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.030776 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.030831 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.030841 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.030856 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.030866 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.042806 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.064175 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.078675 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.100858 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.111897 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.123695 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.133150 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.133176 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.133187 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.133200 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.133209 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.155019 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.196034 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.223996 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.235484 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.235509 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.235517 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.235529 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.235537 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.244470 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:
27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.257051 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.337742 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.337789 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.337799 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.337815 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.337831 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.440320 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.440379 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.440397 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.440439 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.440467 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.542823 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.542861 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.542869 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.542882 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.542891 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.644547 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.644579 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.644589 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.644612 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.644623 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.749737 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.750046 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.750060 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.750077 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.750088 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.755265 4849 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.853766 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.853801 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.853811 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.853826 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.853837 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.946602 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.946914 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.946938 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.950945 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerStarted","Data":"eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.956061 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.956118 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.956135 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.956158 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.956175 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:27Z","lastTransitionTime":"2025-12-09T11:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.969135 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:
27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.980678 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.982164 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.983487 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:27 crc kubenswrapper[4849]: I1209 11:27:27.996752 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:27Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.008394 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.023198 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.044004 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.058757 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.058790 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.058800 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.058814 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.058825 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.064921 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.084486 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.097439 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.108287 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.130104 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.140733 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.156532 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.162813 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.162859 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.162870 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.162888 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.162905 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.171566 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.184025 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.196885 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.219160 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.230980 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.242296 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.257056 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.264978 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.265029 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.265046 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.265070 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.265092 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.281939 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.292440 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.292480 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.292489 
4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.292504 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.292514 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.302297 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: E1209 11:27:28.307863 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.312768 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.312835 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.312847 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.312882 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.312896 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.324498 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: E1209 11:27:28.328676 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.332424 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.332609 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.332707 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.332807 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.332902 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.341096 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: E1209 11:27:28.347096 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"2
8952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.350886 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.350921 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.350929 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.350943 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.350952 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.356318 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: E1209 11:27:28.365478 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.369556 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.369579 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.369589 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.369602 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.369611 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.369612 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc 
kubenswrapper[4849]: E1209 11:27:28.380572 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: E1209 11:27:28.380733 4849 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.381226 4849 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.382471 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.382493 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.382501 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.382514 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.382523 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.395878 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.410490 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27
:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.421866 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.484948 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.484994 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.485007 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.485027 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.485042 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.536114 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:28 crc kubenswrapper[4849]: E1209 11:27:28.536459 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.536588 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:28 crc kubenswrapper[4849]: E1209 11:27:28.536698 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.536898 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:28 crc kubenswrapper[4849]: E1209 11:27:28.537001 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.571553 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bf
dc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.586246 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.587436 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.587462 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.587475 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.587494 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.587507 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.602891 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.619861 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.639089 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26
acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.658580 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.677196 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.689386 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.689449 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.689461 4849 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.689493 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.689503 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.690304 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.701367 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.711353 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.722573 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.737097 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.751638 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.766666 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.778875 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.791095 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.791134 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.791145 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.791161 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.791171 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.894016 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.894259 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.894319 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.894377 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.894458 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:28Z","lastTransitionTime":"2025-12-09T11:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.960555 4849 generic.go:334] "Generic (PLEG): container finished" podID="de61302b-e1bc-4372-8485-36b4fde18e80" containerID="eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf" exitCode=0 Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.960884 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerDied","Data":"eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf"} Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.963071 4849 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 11:27:28 crc kubenswrapper[4849]: I1209 11:27:28.995851 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.000841 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.000958 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.001012 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.001035 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.001049 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.012660 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.031284 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.047278 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.065271 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.079586 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.090654 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.110270 4849 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.110298 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.110306 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.110319 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.110329 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.113324 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d
12fc15f66aa3263bd1500056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.124473 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.138781 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.150138 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.161272 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.176241 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.193297 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.212231 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.214974 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.215062 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.215125 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.215186 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.215247 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.317064 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.317097 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.317106 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.317121 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.317131 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.419809 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.419844 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.419853 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.419868 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.419877 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.523054 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.523083 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.523093 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.523108 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.523118 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.627511 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.627566 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.627578 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.627594 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.627606 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.730804 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.730847 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.730855 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.730869 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.730880 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.833910 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.833956 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.833970 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.833992 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.834007 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.936992 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.937027 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.937035 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.937062 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.937072 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:29Z","lastTransitionTime":"2025-12-09T11:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.975927 4849 generic.go:334] "Generic (PLEG): container finished" podID="de61302b-e1bc-4372-8485-36b4fde18e80" containerID="cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25" exitCode=0 Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.976107 4849 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.977385 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerDied","Data":"cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25"} Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.993622 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:29Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:29 crc kubenswrapper[4849]: I1209 11:27:29.998983 4849 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.007220 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.025087 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d
12fc15f66aa3263bd1500056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.039809 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.039834 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.039842 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.039854 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.039863 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.053816 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.073059 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.087092 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.099403 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.126980 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.139349 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.142249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.142278 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.142286 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.142299 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.142309 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.155292 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.173861 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb3
53b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.185953 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.198591 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.217021 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.231326 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:30Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.244637 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.244669 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.244678 4849 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.244691 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.244700 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.346707 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.346760 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.346773 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.346790 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.346801 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.448841 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.448873 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.448881 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.448895 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.448903 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.540227 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:30 crc kubenswrapper[4849]: E1209 11:27:30.540755 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.540522 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:30 crc kubenswrapper[4849]: E1209 11:27:30.541228 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.542519 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:30 crc kubenswrapper[4849]: E1209 11:27:30.542655 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.550911 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.551096 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.551165 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.551227 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.551280 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.653334 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.653384 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.653400 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.653441 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.653456 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.755931 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.755975 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.755986 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.756001 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.756012 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.859043 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.859100 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.859116 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.859138 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.859154 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.961456 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.961490 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.961498 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.961512 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.961523 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:30Z","lastTransitionTime":"2025-12-09T11:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.981254 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/0.log" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.984369 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056" exitCode=1 Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.984401 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056"} Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.985122 4849 scope.go:117] "RemoveContainer" containerID="03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056" Dec 09 11:27:30 crc kubenswrapper[4849]: I1209 11:27:30.988631 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" event={"ID":"de61302b-e1bc-4372-8485-36b4fde18e80","Type":"ContainerStarted","Data":"acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.010295 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.025272 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.046316 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.065056 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.065098 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.065111 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.065128 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.065139 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.069249 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.083788 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e
01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.096758 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.107353 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.130049 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"message\\\":\\\"tory.go:160\\\\nI1209 11:27:30.604690 5984 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.604909 5984 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605106 5984 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605492 5984 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:27:30.605513 5984 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 11:27:30.605535 5984 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 11:27:30.605560 5984 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 11:27:30.605576 5984 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 11:27:30.605596 5984 factory.go:656] Stopping watch factory\\\\nI1209 11:27:30.605616 5984 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 11:27:30.605631 5984 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:27:30.605648 5984 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 11:27:30.605658 5984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 11:27:30.605665 5984 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.142634 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.157798 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.167550 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.167600 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.167611 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.167629 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.167641 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.174760 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.188156 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.204609 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.225060 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.238608 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.256890 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.270486 4849 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.270554 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.270567 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.270582 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.270593 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.277604 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d
12fc15f66aa3263bd1500056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"message\\\":\\\"tory.go:160\\\\nI1209 11:27:30.604690 5984 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.604909 5984 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605106 5984 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605492 5984 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:27:30.605513 5984 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 11:27:30.605535 5984 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 11:27:30.605560 5984 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 11:27:30.605576 5984 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 11:27:30.605596 5984 factory.go:656] Stopping watch factory\\\\nI1209 11:27:30.605616 5984 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 11:27:30.605631 5984 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:27:30.605648 5984 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 11:27:30.605658 5984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 11:27:30.605665 5984 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.292989 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.306232 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.328231 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.343235 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.356610 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.369373 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.373840 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.373869 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.373881 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.373896 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.373906 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.382324 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.401806 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.413737 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.426031 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.448495 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.460920 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.474216 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:31Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.475685 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.475713 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.475721 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.475736 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.475746 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.578283 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.578320 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.578329 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.578343 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.578353 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.680521 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.680559 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.680569 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.680584 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.680594 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.783793 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.783866 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.783883 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.783902 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.783913 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.886609 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.886665 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.886681 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.886706 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.886723 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.989727 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.989779 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.989788 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.989801 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.989814 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:31Z","lastTransitionTime":"2025-12-09T11:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.994005 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/0.log" Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.996832 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e"} Dec 09 11:27:31 crc kubenswrapper[4849]: I1209 11:27:31.996944 4849 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.012627 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.027633 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.042022 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.067845 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"message\\\":\\\"tory.go:160\\\\nI1209 11:27:30.604690 5984 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.604909 5984 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605106 5984 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605492 5984 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:27:30.605513 5984 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 11:27:30.605535 5984 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 11:27:30.605560 5984 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 11:27:30.605576 5984 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 11:27:30.605596 5984 factory.go:656] Stopping watch factory\\\\nI1209 11:27:30.605616 5984 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 11:27:30.605631 5984 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:27:30.605648 5984 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 11:27:30.605658 5984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 11:27:30.605665 5984 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.086834 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.091790 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.091822 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.091832 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.091848 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.091859 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:32Z","lastTransitionTime":"2025-12-09T11:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.103281 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.119196 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.132225 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.145319 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.159262 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.173831 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.193802 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.193841 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.193851 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.193866 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.193876 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:32Z","lastTransitionTime":"2025-12-09T11:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.198628 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.212099 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.228605 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.245629 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.301722 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.302068 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.302131 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.302208 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.302326 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:32Z","lastTransitionTime":"2025-12-09T11:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.414313 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.414348 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.414356 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.414373 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.414384 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:32Z","lastTransitionTime":"2025-12-09T11:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.517092 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.517124 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.517132 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.517144 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.517152 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:32Z","lastTransitionTime":"2025-12-09T11:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.535441 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.535480 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.535455 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:32 crc kubenswrapper[4849]: E1209 11:27:32.535601 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:32 crc kubenswrapper[4849]: E1209 11:27:32.535703 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:32 crc kubenswrapper[4849]: E1209 11:27:32.535807 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.619096 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.619128 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.619136 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.619151 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.619161 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:32Z","lastTransitionTime":"2025-12-09T11:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.721647 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.721688 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.721699 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.721716 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.721727 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:32Z","lastTransitionTime":"2025-12-09T11:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.823867 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.823898 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.823906 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.823920 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.823930 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:32Z","lastTransitionTime":"2025-12-09T11:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.926117 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.926161 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.926184 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.926203 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:32 crc kubenswrapper[4849]: I1209 11:27:32.926215 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:32Z","lastTransitionTime":"2025-12-09T11:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.002373 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/1.log" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.002994 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/0.log" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.006560 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e" exitCode=1 Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.006617 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.006721 4849 scope.go:117] "RemoveContainer" containerID="03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.007322 4849 scope.go:117] "RemoveContainer" containerID="ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e" Dec 09 11:27:33 crc kubenswrapper[4849]: E1209 11:27:33.007470 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.028580 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.028831 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.028941 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.029081 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.029189 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.040116 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.055351 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.067345 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.084673 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.101069 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.108964 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf"] Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.110086 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.111814 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.111973 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.115758 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.128058 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.131550 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.131602 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.131618 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.131640 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.131680 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.147005 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b
9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"message\\\":\\\"tory.go:160\\\\nI1209 11:27:30.604690 5984 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.604909 5984 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605106 5984 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605492 5984 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:27:30.605513 5984 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 11:27:30.605535 5984 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 11:27:30.605560 5984 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 11:27:30.605576 5984 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 11:27:30.605596 5984 factory.go:656] Stopping watch factory\\\\nI1209 11:27:30.605616 5984 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 11:27:30.605631 5984 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:27:30.605648 5984 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 11:27:30.605658 5984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 11:27:30.605665 5984 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatus
es\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.156742 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.170351 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.183788 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.196206 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.210834 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.219962 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e92bfd32-e3db-4e27-a677-1661aad91e1a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.220029 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg9fg\" (UniqueName: \"kubernetes.io/projected/e92bfd32-e3db-4e27-a677-1661aad91e1a-kube-api-access-mg9fg\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.220059 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e92bfd32-e3db-4e27-a677-1661aad91e1a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.220207 4849 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e92bfd32-e3db-4e27-a677-1661aad91e1a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.226486 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.235586 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.235632 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.235645 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.235670 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.235685 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.241020 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.252437 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.265952 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.280743 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.294074 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.307542 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.320629 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z"
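Every "Failed to update status for pod" entry in this stretch fails the same way: the kubelet's status PATCH is rejected because the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, long before the node clock's 2025-12-09T11:27:33Z. A quick way to confirm what that endpoint is actually serving is to dial it and print the peer certificate's validity window. A minimal Go sketch, assuming only that the endpoint named in the log is reachable from the node (the code is illustrative, not part of OpenShift):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Endpoint taken from the failing webhook URL in the log:
        // https://127.0.0.1:9743/pod?timeout=10s
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            // Verification is skipped on purpose: the goal is to inspect the
            // certificate the server presents, not to trust it.
            InsecureSkipVerify: true,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("NotBefore=%s NotAfter=%s expiredNow=%t\n",
            cert.NotBefore.Format(time.RFC3339),
            cert.NotAfter.Format(time.RFC3339),
            time.Now().After(cert.NotAfter))
    }

If the printed NotAfter matches the 2025-08-24T17:21:41Z in the error, the webhook is serving a stale certificate and needs rotation; this pattern is consistent with a CRC VM started long after its bundle was built, before the cluster's certificate rotation has completed.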
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.320889 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg9fg\" (UniqueName: \"kubernetes.io/projected/e92bfd32-e3db-4e27-a677-1661aad91e1a-kube-api-access-mg9fg\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.320909 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e92bfd32-e3db-4e27-a677-1661aad91e1a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.320931 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e92bfd32-e3db-4e27-a677-1661aad91e1a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.321393 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e92bfd32-e3db-4e27-a677-1661aad91e1a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.321826 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e92bfd32-e3db-4e27-a677-1661aad91e1a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.325875 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e92bfd32-e3db-4e27-a677-1661aad91e1a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.332680 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.338068 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.338129 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.338153 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.338182 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.338199 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.338588 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg9fg\" (UniqueName: \"kubernetes.io/projected/e92bfd32-e3db-4e27-a677-1661aad91e1a-kube-api-access-mg9fg\") pod \"ovnkube-control-plane-749d76644c-n9ndf\" (UID: \"e92bfd32-e3db-4e27-a677-1661aad91e1a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.351383 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e372
88ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.362814 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.378933 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.394122 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.407030 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.421344 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.422948 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.441588 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.441628 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.441639 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.441655 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.441669 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.444507 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z"
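The entry above closes the loop on the certificate failure: network-node-identity-vrzqb is the pod that serves the failing webhook (its webhook container mounts the certificate volume at /etc/webhook-cert/), and even its own status update is rejected by that same webhook, so the condition cannot heal through the API until the certificate is rotated. Inspecting the mounted certificate directly avoids the TLS round-trip. A small Go sketch, assuming the volume holds a PEM file named tls.crt (the mount path is in the log; the file name is a guess):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Mount path from the webhook container spec in the entry above;
        // the tls.crt file name inside it is an assumption, not from the log.
        data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("NotBefore=%s NotAfter=%s expiredNow=%t\n",
            cert.NotBefore.Format(time.RFC3339),
            cert.NotAfter.Format(time.RFC3339),
            time.Now().After(cert.NotAfter))
    }

Comparing this NotAfter with the one in the webhook errors shows whether the secret behind the webhook-cert volume has been rotated but not yet picked up, or was never rotated at all.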
Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.457688 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.476695 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"message\\\":\\\"tory.go:160\\\\nI1209 11:27:30.604690 5984 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.604909 5984 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605106 5984 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605492 5984 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:27:30.605513 5984 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 11:27:30.605535 5984 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 11:27:30.605560 5984 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 11:27:30.605576 5984 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 11:27:30.605596 5984 factory.go:656] Stopping watch factory\\\\nI1209 11:27:30.605616 5984 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 11:27:30.605631 5984 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:27:30.605648 5984 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 11:27:30.605658 5984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 11:27:30.605665 5984 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 
2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.544058 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.544098 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.544113 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.544130 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.544150 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.647404 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.647444 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.647452 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.647468 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.647478 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.751186 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.751257 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.751280 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.751309 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.751332 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.836626 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qcffq"] Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.837341 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:33 crc kubenswrapper[4849]: E1209 11:27:33.837449 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.854786 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.854875 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.854891 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.854944 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.854966 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.860239 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.878285 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.891224 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.913649 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"message\\\":\\\"tory.go:160\\\\nI1209 11:27:30.604690 5984 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.604909 5984 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605106 5984 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605492 5984 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:27:30.605513 5984 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 11:27:30.605535 5984 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 11:27:30.605560 5984 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 11:27:30.605576 5984 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 11:27:30.605596 5984 factory.go:656] Stopping watch factory\\\\nI1209 11:27:30.605616 5984 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 11:27:30.605631 5984 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:27:30.605648 5984 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 11:27:30.605658 5984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 11:27:30.605665 5984 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.926519 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.929134 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.929198 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k84jm\" (UniqueName: \"kubernetes.io/projected/fa5f421b-d486-4b0d-a615-7887df025c00-kube-api-access-k84jm\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.946573 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.957982 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.958014 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.958025 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.958041 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.958051 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:33Z","lastTransitionTime":"2025-12-09T11:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.962088 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.976221 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:33 crc kubenswrapper[4849]: I1209 11:27:33.991713 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:33Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.005477 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.012045 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" event={"ID":"e92bfd32-e3db-4e27-a677-1661aad91e1a","Type":"ContainerStarted","Data":"995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.012099 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" event={"ID":"e92bfd32-e3db-4e27-a677-1661aad91e1a","Type":"ContainerStarted","Data":"ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.012111 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" event={"ID":"e92bfd32-e3db-4e27-a677-1661aad91e1a","Type":"ContainerStarted","Data":"2cc3124e5eee65096bbe4bc6cfe9997df9630a5fd87842859a4d4b4141cf6363"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.013462 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/1.log" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.026680 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.029798 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " 
pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.029850 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k84jm\" (UniqueName: \"kubernetes.io/projected/fa5f421b-d486-4b0d-a615-7887df025c00-kube-api-access-k84jm\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.029958 4849 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.030011 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs podName:fa5f421b-d486-4b0d-a615-7887df025c00 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:34.529996926 +0000 UTC m=+37.069881242 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs") pod "network-metrics-daemon-qcffq" (UID: "fa5f421b-d486-4b0d-a615-7887df025c00") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.036893 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.056056 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k84jm\" (UniqueName: \"kubernetes.io/projected/fa5f421b-d486-4b0d-a615-7887df025c00-kube-api-access-k84jm\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.056321 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.059929 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.059950 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.059958 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.059970 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.059980 4849 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.069683 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.081944 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.096825 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.113573 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.125877 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.135939 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.144801 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.156852 4849 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.162031 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.162061 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.162073 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.162090 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.162102 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.167563 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.180116 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.195216 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.205572 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.218969 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.232005 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.249855 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.261702 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.264453 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.264516 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.264530 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.264547 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.264580 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.274459 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.283745 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
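The KubeletNotReady message above is self-diagnosing: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, and ovnkube-controller, which would write one, is shown crash-looping below. A stdlib-only Python sketch of the same directory check follows; the path is copied from the message, while the accepted extensions are an assumption rather than the runtime's exact rule.

#!/usr/bin/env python3
# Minimal sketch: reproduce the readiness check behind the KubeletNotReady
# message by listing CNI configuration files. The directory comes straight
# from the log; the extension set is an assumption, not the runtime's exact rule.
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")

confs = []
if CNI_DIR.is_dir():
    confs = sorted(p for p in CNI_DIR.iterdir()
                   if p.suffix in {".conf", ".conflist", ".json"})

if confs:
    for p in confs:
        print("found:", p)
else:
    print(f"no CNI configuration file in {CNI_DIR} - network plugin not ready")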
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.301215 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"message\\\":\\\"tory.go:160\\\\nI1209 11:27:30.604690 5984 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.604909 5984 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605106 5984 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605492 5984 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:27:30.605513 5984 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 11:27:30.605535 5984 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 11:27:30.605560 5984 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 11:27:30.605576 5984 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 11:27:30.605596 5984 factory.go:656] Stopping watch factory\\\\nI1209 11:27:30.605616 5984 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 11:27:30.605631 5984 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:27:30.605648 5984 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 11:27:30.605658 5984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 11:27:30.605665 5984 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.316087 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.329534 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:34Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.366586 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.366624 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.366636 4849 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.366650 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.366661 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.433703 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.433842 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:27:50.433816974 +0000 UTC m=+52.973701290 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.468928 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.468995 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.469015 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.469038 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.469053 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.534968 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.535020 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.535060 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.535084 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535104 4849 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535148 4849 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535173 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs podName:fa5f421b-d486-4b0d-a615-7887df025c00 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:35.535154102 +0000 UTC m=+38.075038418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs") pod "network-metrics-daemon-qcffq" (UID: "fa5f421b-d486-4b0d-a615-7887df025c00") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535191 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:50.535183673 +0000 UTC m=+53.075067989 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.535107 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535246 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535260 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535273 4849 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535296 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:50.535288635 +0000 UTC m=+53.075172951 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535331 4849 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535350 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:50.535344646 +0000 UTC m=+53.075228962 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535387 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535396 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535443 4849 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535468 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:50.535460689 +0000 UTC m=+53.075345005 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.535644 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535742 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.535783 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535834 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.535864 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:34 crc kubenswrapper[4849]: E1209 11:27:34.535915 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.571869 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.572282 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.572387 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.572495 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.572560 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.675666 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.675725 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.675744 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.675772 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.675791 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.778192 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.778245 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.778260 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.778288 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.778304 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.881232 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.881286 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.881296 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.881319 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.881330 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.985301 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.985834 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.985954 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.986081 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:34 crc kubenswrapper[4849]: I1209 11:27:34.986236 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:34Z","lastTransitionTime":"2025-12-09T11:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.089465 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.090061 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.090151 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.090243 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.090329 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:35Z","lastTransitionTime":"2025-12-09T11:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.193243 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.193285 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.193300 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.193321 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.193347 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:35Z","lastTransitionTime":"2025-12-09T11:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.295844 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.295882 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.295890 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.295905 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.295916 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:35Z","lastTransitionTime":"2025-12-09T11:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.398147 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.398185 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.398198 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.398218 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.398231 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:35Z","lastTransitionTime":"2025-12-09T11:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.500669 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.500705 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.500716 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.500731 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.500742 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:35Z","lastTransitionTime":"2025-12-09T11:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.535804 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:35 crc kubenswrapper[4849]: E1209 11:27:35.536137 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.536501 4849 scope.go:117] "RemoveContainer" containerID="8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.548561 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:35 crc kubenswrapper[4849]: E1209 11:27:35.548702 4849 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:35 crc kubenswrapper[4849]: E1209 11:27:35.548746 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs podName:fa5f421b-d486-4b0d-a615-7887df025c00 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:37.548731561 +0000 UTC m=+40.088615877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs") pod "network-metrics-daemon-qcffq" (UID: "fa5f421b-d486-4b0d-a615-7887df025c00") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.603783 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.603828 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.603841 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.603857 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.603869 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:35Z","lastTransitionTime":"2025-12-09T11:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.706045 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.706069 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.706077 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.706088 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.706099 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:35Z","lastTransitionTime":"2025-12-09T11:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.808069 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.808123 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.808140 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.808161 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.808177 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:35Z","lastTransitionTime":"2025-12-09T11:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.910919 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.910974 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.910996 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.911021 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:35 crc kubenswrapper[4849]: I1209 11:27:35.911038 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:35Z","lastTransitionTime":"2025-12-09T11:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.013733 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.013800 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.013825 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.013855 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.013880 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.025097 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.026608 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.028100 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.044762 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.060832 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.073702 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.089396 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.105154 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.117232 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.117296 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.117317 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.117353 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.117372 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.140346 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.157073 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.172673 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.184148 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.201629 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"message\\\":\\\"tory.go:160\\\\nI1209 11:27:30.604690 5984 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.604909 5984 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605106 5984 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605492 5984 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:27:30.605513 5984 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 11:27:30.605535 5984 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 11:27:30.605560 5984 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 11:27:30.605576 5984 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 11:27:30.605596 5984 factory.go:656] Stopping watch factory\\\\nI1209 11:27:30.605616 5984 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 11:27:30.605631 5984 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:27:30.605648 5984 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 11:27:30.605658 5984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 11:27:30.605665 5984 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.216064 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.220145 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.220184 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.220193 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.220207 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.220216 4849 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.227681 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.239089 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.250023 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.262286 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.272705 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.287067 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:36Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.329242 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.329296 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.329323 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.329344 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.329359 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.432755 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.432804 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.432814 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.432831 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.432842 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.535456 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.535521 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.535540 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.535477 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.535571 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: E1209 11:27:36.535598 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.535618 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.535641 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.535658 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: E1209 11:27:36.535704 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:36 crc kubenswrapper[4849]: E1209 11:27:36.535811 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.637807 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.638113 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.638244 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.638362 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.638516 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.741397 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.741502 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.741518 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.741570 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.741590 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.845182 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.845251 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.845263 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.845285 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.845298 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.948226 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.948284 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.948297 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.948318 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:36 crc kubenswrapper[4849]: I1209 11:27:36.948331 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:36Z","lastTransitionTime":"2025-12-09T11:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.050954 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.051244 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.051312 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.051377 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.051471 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.154780 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.155502 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.155555 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.155580 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.155595 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.258813 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.258887 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.258902 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.258926 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.258939 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.362587 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.363183 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.363231 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.363263 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.363282 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.466297 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.466353 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.466366 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.466382 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.466393 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.535406 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:37 crc kubenswrapper[4849]: E1209 11:27:37.535573 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.567682 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:37 crc kubenswrapper[4849]: E1209 11:27:37.567872 4849 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:37 crc kubenswrapper[4849]: E1209 11:27:37.567958 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs podName:fa5f421b-d486-4b0d-a615-7887df025c00 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:41.5679371 +0000 UTC m=+44.107821416 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs") pod "network-metrics-daemon-qcffq" (UID: "fa5f421b-d486-4b0d-a615-7887df025c00") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.568939 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.569002 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.569021 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.569048 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.569064 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.671912 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.671973 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.671984 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.672004 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.672015 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.775264 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.775342 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.775365 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.775405 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.775477 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.878434 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.878491 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.878507 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.878532 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.878549 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.981610 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.981692 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.981728 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.981759 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:37 crc kubenswrapper[4849]: I1209 11:27:37.981840 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:37Z","lastTransitionTime":"2025-12-09T11:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.084141 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.084170 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.084180 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.084194 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.084205 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.186323 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.186455 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.186483 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.186514 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.186536 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.289315 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.289362 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.289395 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.289435 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.289453 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.393351 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.393471 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.393496 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.393529 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.393554 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.495775 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.495809 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.495817 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.495832 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.495841 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.536699 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.536740 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.537054 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:38 crc kubenswrapper[4849]: E1209 11:27:38.537604 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:38 crc kubenswrapper[4849]: E1209 11:27:38.538193 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:38 crc kubenswrapper[4849]: E1209 11:27:38.538266 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.550847 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.565295 4849 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.581184 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.595899 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.597652 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.597671 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.597695 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.597708 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.597717 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.609109 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.629989 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.643240 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.656560 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.660087 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.660112 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.660136 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.660150 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.660159 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.671133 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: E1209 11:27:38.680158 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.683993 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.684037 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.684048 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.684064 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.684075 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.693313 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b
9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03eca2fa2a7401053e4d1bcded5a430b2e706f8d12fc15f66aa3263bd1500056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"message\\\":\\\"tory.go:160\\\\nI1209 11:27:30.604690 5984 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.604909 5984 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605106 5984 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:30.605492 5984 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:27:30.605513 5984 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 11:27:30.605535 5984 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 11:27:30.605560 5984 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 11:27:30.605576 5984 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 11:27:30.605596 5984 factory.go:656] Stopping watch factory\\\\nI1209 11:27:30.605616 5984 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 11:27:30.605631 5984 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:27:30.605648 5984 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 11:27:30.605658 5984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 11:27:30.605665 5984 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatus
es\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: E1209 11:27:38.697996 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.702223 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.702276 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.702293 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.702321 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.702336 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.709303 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: E1209 11:27:38.714813 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.718455 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.718490 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.718502 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.718522 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.718536 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.723282 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: E1209 11:27:38.736237 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.739994 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.742557 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.742644 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.742659 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.742678 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.742725 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.754206 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: E1209 11:27:38.754737 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"2
8952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: E1209 11:27:38.754955 4849 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.756621 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.756679 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.756692 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.756710 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.756722 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.766730 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.777264 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.792485 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:38Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.858920 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.858948 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.858957 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.858970 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.858979 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.961971 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.962014 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.962026 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.962053 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:38 crc kubenswrapper[4849]: I1209 11:27:38.962071 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:38Z","lastTransitionTime":"2025-12-09T11:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.065272 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.065343 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.065361 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.065386 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.065403 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.168203 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.168273 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.168287 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.168306 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.168319 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.271209 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.271249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.271257 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.271274 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.271283 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.373728 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.373793 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.373803 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.373817 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.373826 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.476212 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.476284 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.476297 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.476315 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.476327 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.536479 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:39 crc kubenswrapper[4849]: E1209 11:27:39.536724 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.579249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.579295 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.579306 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.579322 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.579333 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.682117 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.682201 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.682220 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.682245 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.682270 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.785620 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.785669 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.785683 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.785701 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.785713 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.893372 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.893453 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.893467 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.893486 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.893498 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.995341 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.995377 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.995387 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.995402 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:39 crc kubenswrapper[4849]: I1209 11:27:39.995441 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:39Z","lastTransitionTime":"2025-12-09T11:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.097564 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.097625 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.097638 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.097659 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.097672 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:40Z","lastTransitionTime":"2025-12-09T11:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.200522 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.200563 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.200573 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.200590 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.200599 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:40Z","lastTransitionTime":"2025-12-09T11:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.303376 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.303449 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.303464 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.303486 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.303501 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:40Z","lastTransitionTime":"2025-12-09T11:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.406624 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.406689 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.406697 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.406714 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.406725 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:40Z","lastTransitionTime":"2025-12-09T11:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.509440 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.509505 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.509522 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.509546 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.509562 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:40Z","lastTransitionTime":"2025-12-09T11:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.535786 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.535839 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:40 crc kubenswrapper[4849]: E1209 11:27:40.535987 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.536060 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:40 crc kubenswrapper[4849]: E1209 11:27:40.536275 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:40 crc kubenswrapper[4849]: E1209 11:27:40.536067 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.612259 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.612294 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.612303 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.612317 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.612325 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:40Z","lastTransitionTime":"2025-12-09T11:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.714222 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.714270 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.714284 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.714301 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.714312 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:40Z","lastTransitionTime":"2025-12-09T11:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.816624 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.816660 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.816672 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.816689 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.816701 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:40Z","lastTransitionTime":"2025-12-09T11:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.919128 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.919240 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.919254 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.919272 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:40 crc kubenswrapper[4849]: I1209 11:27:40.919285 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:40Z","lastTransitionTime":"2025-12-09T11:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.021518 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.021577 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.021592 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.021610 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.021622 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.124755 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.124791 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.124800 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.124814 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.124822 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.226927 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.226953 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.226962 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.226975 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.226986 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.329372 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.329938 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.330011 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.330085 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.330161 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.433533 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.433807 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.433930 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.434011 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.434092 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.535688 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:41 crc kubenswrapper[4849]: E1209 11:27:41.536309 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.537776 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.537806 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.537815 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.537829 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.537839 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.610126 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:41 crc kubenswrapper[4849]: E1209 11:27:41.610349 4849 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:41 crc kubenswrapper[4849]: E1209 11:27:41.610477 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs podName:fa5f421b-d486-4b0d-a615-7887df025c00 nodeName:}" failed. No retries permitted until 2025-12-09 11:27:49.610458858 +0000 UTC m=+52.150343174 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs") pod "network-metrics-daemon-qcffq" (UID: "fa5f421b-d486-4b0d-a615-7887df025c00") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.643497 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.643797 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.643893 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.643994 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.644094 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.746812 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.747193 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.747538 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.747761 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.747950 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.851012 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.851076 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.851089 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.851110 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.851128 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.953540 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.953818 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.953925 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.954043 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:41 crc kubenswrapper[4849]: I1209 11:27:41.954154 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:41Z","lastTransitionTime":"2025-12-09T11:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.056803 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.056843 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.056854 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.056869 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.056880 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.159472 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.159763 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.159996 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.160212 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.160439 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.263338 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.263379 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.263392 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.263430 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.263443 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.365685 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.365720 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.365729 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.365743 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.365753 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.468342 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.468639 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.468729 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.468815 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.468895 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.571314 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.571594 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.571685 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.571770 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.571844 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.588024 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.592133 4849 scope.go:117] "RemoveContainer" containerID="ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e" Dec 09 11:27:42 crc kubenswrapper[4849]: E1209 11:27:42.592335 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.593256 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.593510 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:42 crc kubenswrapper[4849]: E1209 11:27:42.593509 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.593693 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:42 crc kubenswrapper[4849]: E1209 11:27:42.593840 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:42 crc kubenswrapper[4849]: E1209 11:27:42.593869 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.608951 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.634900 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26
acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 
2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.652344 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.670316 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.674646 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.674707 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.674721 4849 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.674737 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.674749 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.687504 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.706632 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.719944 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.737008 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.755604 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.775762 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.777166 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.777202 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.777232 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.777251 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.777261 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.792573 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.805318 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.820667 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.833076 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.854177 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.870840 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.879476 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.879511 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.879522 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.879536 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.879546 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.887984 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:42Z is after 2025-08-24T17:21:41Z"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.981713 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.981748 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.981757 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.981772 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:42 crc kubenswrapper[4849]: I1209 11:27:42.981781 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:42Z","lastTransitionTime":"2025-12-09T11:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.083790 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.083825 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.083835 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.083850 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.083861 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:43Z","lastTransitionTime":"2025-12-09T11:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.186360 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.186404 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.186440 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.186461 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.186474 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:43Z","lastTransitionTime":"2025-12-09T11:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.288762 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.288816 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.288828 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.288860 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.288871 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:43Z","lastTransitionTime":"2025-12-09T11:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.391545 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.392108 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.392196 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.392294 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.392374 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:43Z","lastTransitionTime":"2025-12-09T11:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.495230 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.495256 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.495264 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.495276 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.495286 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:43Z","lastTransitionTime":"2025-12-09T11:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.535565 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:43 crc kubenswrapper[4849]: E1209 11:27:43.535714 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.598064 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.598123 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.598139 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.598154 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.598165 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:43Z","lastTransitionTime":"2025-12-09T11:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.700755 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.700806 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.700816 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.700832 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.700843 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:43Z","lastTransitionTime":"2025-12-09T11:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.802948 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.802987 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.802999 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.803015 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.803025 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:43Z","lastTransitionTime":"2025-12-09T11:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.905295 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.905344 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.905359 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.905379 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:43 crc kubenswrapper[4849]: I1209 11:27:43.905393 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:43Z","lastTransitionTime":"2025-12-09T11:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.008134 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.008169 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.008182 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.008201 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.008215 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.110614 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.110658 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.110669 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.110685 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.110707 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.213148 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.213487 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.213570 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.213656 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.213738 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.316039 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.316078 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.316088 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.316104 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.316115 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.419013 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.419062 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.419075 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.419094 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.419107 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.522795 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.523227 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.523480 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.523697 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.523906 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.536100 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:44 crc kubenswrapper[4849]: E1209 11:27:44.536260 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.536647 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:44 crc kubenswrapper[4849]: E1209 11:27:44.536723 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.536744 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:44 crc kubenswrapper[4849]: E1209 11:27:44.536795 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.627536 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.627614 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.627642 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.627675 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.627696 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.730277 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.730357 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.730394 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.730463 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.730499 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.833199 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.833250 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.833267 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.833288 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.833303 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.935790 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.935822 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.935831 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.935846 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:44 crc kubenswrapper[4849]: I1209 11:27:44.935856 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:44Z","lastTransitionTime":"2025-12-09T11:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.038594 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.038859 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.039002 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.039185 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.039299 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.141930 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.142117 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.142191 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.142272 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.142331 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.244445 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.244481 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.244492 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.244510 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.244521 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.346483 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.346548 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.346565 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.346579 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.346589 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.449329 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.449390 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.449461 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.449494 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.449517 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.535971 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:45 crc kubenswrapper[4849]: E1209 11:27:45.536181 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.552140 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.552181 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.552193 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.552209 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.552221 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.655113 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.655142 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.655152 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.655169 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.655185 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.758702 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.759138 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.759553 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.759882 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.760221 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.863062 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.863133 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.863157 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.863187 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.863211 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.967242 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.967308 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.967329 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.967363 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:45 crc kubenswrapper[4849]: I1209 11:27:45.967387 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:45Z","lastTransitionTime":"2025-12-09T11:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.070241 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.070297 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.070315 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.070339 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.070356 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.173319 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.173346 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.173354 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.173367 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.173375 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.275735 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.275761 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.275769 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.275782 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.275790 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.378527 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.378574 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.378582 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.378598 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.378610 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.481644 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.482163 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.482234 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.482305 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.482379 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.536051 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.536069 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:46 crc kubenswrapper[4849]: E1209 11:27:46.536458 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.536088 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:46 crc kubenswrapper[4849]: E1209 11:27:46.536789 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:46 crc kubenswrapper[4849]: E1209 11:27:46.537150 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.584965 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.585031 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.585051 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.585074 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.585088 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.688518 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.688761 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.688861 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.688960 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.689065 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.791919 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.791961 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.791970 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.791985 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.791998 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.893820 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.893872 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.893905 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.893927 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.893941 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.997166 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.997231 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.997249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.997273 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:46 crc kubenswrapper[4849]: I1209 11:27:46.997291 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:46Z","lastTransitionTime":"2025-12-09T11:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.100097 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.100163 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.100184 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.100212 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.100231 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:47Z","lastTransitionTime":"2025-12-09T11:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.206238 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.206284 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.206292 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.206307 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.206318 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:47Z","lastTransitionTime":"2025-12-09T11:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.309055 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.309090 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.309101 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.309116 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.309127 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:47Z","lastTransitionTime":"2025-12-09T11:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.411477 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.411781 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.411932 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.412077 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.412201 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:47Z","lastTransitionTime":"2025-12-09T11:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.514989 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.515050 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.515062 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.515077 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.515087 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:47Z","lastTransitionTime":"2025-12-09T11:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.535871 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq"
Dec 09 11:27:47 crc kubenswrapper[4849]: E1209 11:27:47.536006 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00"
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.617579 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.617614 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.617622 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.617637 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.617645 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:47Z","lastTransitionTime":"2025-12-09T11:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.720320 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.720367 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.720376 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.720388 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.720397 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:47Z","lastTransitionTime":"2025-12-09T11:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.824206 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.824252 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.824263 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.824280 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.824292 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:47Z","lastTransitionTime":"2025-12-09T11:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.927179 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.927455 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.927668 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.927765 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:47 crc kubenswrapper[4849]: I1209 11:27:47.927843 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:47Z","lastTransitionTime":"2025-12-09T11:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.030337 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.030590 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.030713 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.030839 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.030943 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.133362 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.133694 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.133896 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.134027 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.134154 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.238339 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.238905 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.239095 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.239230 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.239345 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.341906 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.342391 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.342524 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.342633 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.342741 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.445908 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.446371 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.446501 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.446595 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.446710 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.536595 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.536636 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.536601 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 11:27:48 crc kubenswrapper[4849]: E1209 11:27:48.536744 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 11:27:48 crc kubenswrapper[4849]: E1209 11:27:48.536912 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 11:27:48 crc kubenswrapper[4849]: E1209 11:27:48.536999 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.549148 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.549537 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.549632 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.549726 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.549815 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.558291 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.574031 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288
cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.588165 4849 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.610510 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.626554 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.643775 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.653240 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.653293 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.653301 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.653321 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.653331 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.662638 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.679519 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.700065 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T1
1:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.718620 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.732710 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.756263 4849 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.756324 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.756371 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.756394 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.756440 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.757554 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b
9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.772982 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.788315 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.806570 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.821890 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.835829 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.860343 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.860445 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.860459 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.860477 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.860489 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.866834 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.867205 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.867324 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.867477 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.867579 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: E1209 11:27:48.885779 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.889893 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.889947 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.889958 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.889976 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.889987 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: E1209 11:27:48.901012 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.904113 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.904141 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.904151 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.904166 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.904177 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: E1209 11:27:48.920092 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.924983 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.925156 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.925241 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.925318 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.925406 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: E1209 11:27:48.939730 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.944826 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.944874 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.944885 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.944905 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.944917 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:48 crc kubenswrapper[4849]: E1209 11:27:48.956269 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:48Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:48 crc kubenswrapper[4849]: E1209 11:27:48.956384 4849 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.963145 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.963145 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.963188 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.963199 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.963214 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:48 crc kubenswrapper[4849]: I1209 11:27:48.963224 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:48Z","lastTransitionTime":"2025-12-09T11:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.065710 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.065775 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.065793 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.065816 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.065832 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.169099 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.169160 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.169172 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.169192 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.169206 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.272678 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.272740 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.272758 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.272782 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.272800 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.376557 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.376610 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.376624 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.376649 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.376659 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.484817 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.484898 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.484909 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.484926 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.484941 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.535762 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:49 crc kubenswrapper[4849]: E1209 11:27:49.535916 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.588034 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.588081 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.588098 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.588121 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.588139 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.652370 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:49 crc kubenswrapper[4849]: E1209 11:27:49.652698 4849 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
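
The secret.go failure above does not mean the Secret is absent from the cluster; "not registered" means the kubelet's local object cache has not yet been told about it. When triaging a capture like this one, it helps to tally which objects keep appearing in these messages, to separate one stuck object from systemic cache staleness. The sketch below is a minimal helper under stated assumptions: it expects the journal to have been saved to a file named kubelet.log, which is a hypothetical path, not something from the log.

```go
// notregistered.go - minimal sketch: tally the API objects that the kubelet
// reports as `not registered` in a saved copy of this journal.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the pattern seen in the log: object "namespace"/"name" not registered
	re := regexp.MustCompile(`object "([^"]+)"/"([^"]+)" not registered`)

	f, err := os.Open("kubelet.log") // assumed location of the saved journal
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+"/"+m[2]]++
		}
	}
	for obj, n := range counts {
		fmt.Printf("%6d  %s\n", n, obj)
	}
}
```
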
Dec 09 11:27:49 crc kubenswrapper[4849]: E1209 11:27:49.652829 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs podName:fa5f421b-d486-4b0d-a615-7887df025c00 nodeName:}" failed. No retries permitted until 2025-12-09 11:28:05.652801876 +0000 UTC m=+68.192686192 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs") pod "network-metrics-daemon-qcffq" (UID: "fa5f421b-d486-4b0d-a615-7887df025c00") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.691043 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.691086 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.691098 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.691117 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.691129 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.793577 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.793652 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.793671 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.793700 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.793723 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.896918 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.896990 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.897001 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.897023 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:49 crc kubenswrapper[4849]: I1209 11:27:49.897037 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:49.999865 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:49.999900 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:49.999907 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:49.999922 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:49.999933 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:49Z","lastTransitionTime":"2025-12-09T11:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.102377 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.102468 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.102479 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.102504 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.102518 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:50Z","lastTransitionTime":"2025-12-09T11:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.205188 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.205216 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.205223 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.205236 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.205244 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:50Z","lastTransitionTime":"2025-12-09T11:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.307441 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.307471 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.307481 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.307497 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.307507 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:50Z","lastTransitionTime":"2025-12-09T11:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.409904 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.409945 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.409954 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.409969 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.409978 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:50Z","lastTransitionTime":"2025-12-09T11:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.459399 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.459613 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:28:22.459595002 +0000 UTC m=+84.999479318 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.512539 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.512588 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.512600 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.512617 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.512629 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:50Z","lastTransitionTime":"2025-12-09T11:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.536276 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.536396 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.536516 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
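
The TearDown failure above is a driver registration problem: kubevirt.io.hostpath-provisioner is simply not in the kubelet's list of registered CSI drivers at this point. CSI drivers register by placing a socket in the kubelet's plugin registration directory, which defaults to /var/lib/kubelet/plugins_registry. The sketch below just lists that directory to show what has registered; the path is the default and would need adjusting if the kubelet is configured with a different root directory.

```go
// csicheck.go - minimal sketch: list the kubelet plugin-registration sockets
// to see whether kubevirt.io.hostpath-provisioner has registered yet.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Default kubelet plugin registration directory; adjust for a custom root.
	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	for _, e := range entries {
		// A registered hostpath provisioner would show up as a *.sock entry here.
		fmt.Println(e.Name())
	}
}
```
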
Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.536543 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.536721 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.536895 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.560183 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.560535 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.560658 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.560769 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.560361 4849 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.560696 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.561057 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:50 crc kubenswrapper[4849]: E1209
11:27:50.561075 4849 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.560784 4849 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.561053 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:28:22.561014772 +0000 UTC m=+85.100899128 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.561160 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 11:28:22.561145325 +0000 UTC m=+85.101029641 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
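
The "No retries permitted until ..." entries around this point show the volume reconciler's exponential backoff at work: the earlier metrics-certs retry was scheduled 16s out (m=+68), and these are 32s out (m=+85). The sketch below only illustrates the doubling-with-a-cap pattern visible in the log; the initial delay and the cap are illustrative assumptions, not values taken from kubelet source.

```go
// backoff.go - minimal sketch of the doubling retry delay visible in these
// entries (durationBeforeRetry 16s earlier, 32s here).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // assumed starting point
	maxDelay := 2 * time.Minute     // assumed upper bound

	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, delay)
		delay *= 2 // double after every failure
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

With a 500ms start, doubling reaches 16s on the sixth failure and 32s on the seventh, which is consistent with the two values recorded in this capture.
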
Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.561175 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:28:22.561167366 +0000 UTC m=+85.101051792 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.560880 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.561193 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.561202 4849 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:50 crc kubenswrapper[4849]: E1209 11:27:50.561230 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 11:28:22.561224567 +0000 UTC m=+85.101108883 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.615405 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.615451 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.615459 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.615472 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.615480 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:50Z","lastTransitionTime":"2025-12-09T11:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.717814 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.717883 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.717896 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.717912 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.717925 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:50Z","lastTransitionTime":"2025-12-09T11:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.820619 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.820692 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.820706 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.820729 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.820744 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:50Z","lastTransitionTime":"2025-12-09T11:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.923603 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.923684 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.923698 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.923716 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:50 crc kubenswrapper[4849]: I1209 11:27:50.923730 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:50Z","lastTransitionTime":"2025-12-09T11:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.026762 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.026840 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.026859 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.026887 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.026908 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.129932 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.129989 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.130007 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.130033 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.130051 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.232540 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.232590 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.232604 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.232628 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.232641 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.335312 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.335381 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.335405 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.335468 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.335491 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.438105 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.438143 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.438158 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.438176 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.438185 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.536584 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq"
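
Every NotReady heartbeat in this capture cites the same root condition: no CNI configuration file in /etc/kubernetes/cni/net.d/. Checking that directory on the node for the file types the CNI loader accepts (.conf, .conflist, .json) is the obvious first step. The sketch below is a minimal check, assuming the path exactly as it appears in the log.

```go
// cnicheck.go - minimal sketch: look for CNI config files in the directory
// the kubelet names in its NetworkPluginNotReady errors.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path taken from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions the CNI library accepts
			fmt.Println("config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration files found; matches the NetworkPluginNotReady errors")
	}
}
```
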
Dec 09 11:27:51 crc kubenswrapper[4849]: E1209 11:27:51.536780 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.540266 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.540300 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.540311 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.540325 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.540336 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.642859 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.642896 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.642904 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.642918 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.642927 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.745978 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.746015 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.746024 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.746038 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.746047 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.848918 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.849152 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.849286 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.849367 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.849478 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.952526 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.952575 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.952586 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.952602 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:51 crc kubenswrapper[4849]: I1209 11:27:51.952615 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:51Z","lastTransitionTime":"2025-12-09T11:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.055227 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.055577 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.055660 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.055748 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.055827 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.158804 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.158852 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.158862 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.158880 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.158893 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.260807 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.260850 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.260861 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.260877 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.260890 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.362900 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.362954 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.362965 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.362980 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.362991 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.464822 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.464870 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.464890 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.464912 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.464929 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.535954 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.536099 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:52 crc kubenswrapper[4849]: E1209 11:27:52.536792 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.536114 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:52 crc kubenswrapper[4849]: E1209 11:27:52.537022 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:52 crc kubenswrapper[4849]: E1209 11:27:52.536275 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.566903 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.566955 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.566965 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.566981 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.566991 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.669359 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.669399 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.669426 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.669471 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.669493 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.771965 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.772007 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.772018 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.772034 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.772046 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.875241 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.875283 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.875294 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.875309 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.875319 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.977805 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.977847 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.977858 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.977875 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:52 crc kubenswrapper[4849]: I1209 11:27:52.977886 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:52Z","lastTransitionTime":"2025-12-09T11:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.077596 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.080220 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.080254 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.080266 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.080281 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.080290 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:53Z","lastTransitionTime":"2025-12-09T11:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.101618 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.113688 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.126379 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.138519 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.148457 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.160560 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.175957 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.182196 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.182242 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.182255 4849 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.182274 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.182286 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:53Z","lastTransitionTime":"2025-12-09T11:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.188281 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.206617 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b
9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.216426 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.232268 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.245011 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.263447 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.276021 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.284493 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.284529 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.284537 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.284551 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.284560 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:53Z","lastTransitionTime":"2025-12-09T11:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.289933 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.303465 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.315010 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:53Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.387325 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.387375 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.387384 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.387401 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.387433 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:53Z","lastTransitionTime":"2025-12-09T11:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.490578 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.490632 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.490649 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.490666 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.490679 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:53Z","lastTransitionTime":"2025-12-09T11:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.535432 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:53 crc kubenswrapper[4849]: E1209 11:27:53.535804 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.592480 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.592769 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.592944 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.593229 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.593317 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:53Z","lastTransitionTime":"2025-12-09T11:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.696201 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.696249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.696263 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.696283 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.696299 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:53Z","lastTransitionTime":"2025-12-09T11:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.798484 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.798795 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.798933 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.799076 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.799204 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:53Z","lastTransitionTime":"2025-12-09T11:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.903056 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.903098 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.903115 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.903136 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:53 crc kubenswrapper[4849]: I1209 11:27:53.903153 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:53Z","lastTransitionTime":"2025-12-09T11:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.006624 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.006987 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.007185 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.007408 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.007635 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.109791 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.109834 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.109844 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.109859 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.109869 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.212978 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.213037 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.213049 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.213070 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.213084 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.315400 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.315631 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.315687 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.315774 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.315835 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.417675 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.417971 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.418081 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.418184 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.418462 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.522449 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.522491 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.522500 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.522520 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.522531 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.536521 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:54 crc kubenswrapper[4849]: E1209 11:27:54.536741 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.536946 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.536990 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:54 crc kubenswrapper[4849]: E1209 11:27:54.537140 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:54 crc kubenswrapper[4849]: E1209 11:27:54.537243 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.537983 4849 scope.go:117] "RemoveContainer" containerID="ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.626024 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.626531 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.626569 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.626599 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.626611 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.729973 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.730037 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.730050 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.730074 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.730088 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.834470 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.834732 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.835049 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.835197 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.835259 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.917222 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.931084 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.937906 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.937945 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.937957 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.937974 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.937986 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:54Z","lastTransitionTime":"2025-12-09T11:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.940696 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:54Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.953447 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:54Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.965542 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:54Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.982905 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:54Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:54 crc kubenswrapper[4849]: I1209 11:27:54.995731 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:54Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.008937 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.017851 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.032618 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.040884 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.040924 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.040933 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.040949 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.040959 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.045705 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.058838 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.073058 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.089014 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.095663 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/1.log" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.098363 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.099036 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.124096 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.143717 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.143751 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.143760 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.143774 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.143785 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.145354 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.158237 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.170099 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.190574 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.203529 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.216760 4849 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.236330 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.246555 4849 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.246778 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.246836 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.246897 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.246981 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.257848 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b5
9abf069638699af6c8976d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 
2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuse
s\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.275139 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.298492 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.311668 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.326160 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.343693 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.352681 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.352716 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.352726 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.352742 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.352773 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.362297 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.376729 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.393140 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.405883 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.429324 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.442610 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.454835 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.454877 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.454887 4849 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.454903 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.454912 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.456480 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.469011 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.490995 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:55Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.536083 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:55 crc kubenswrapper[4849]: E1209 11:27:55.536239 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.557523 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.557547 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.557555 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.557588 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.557597 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.671037 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.671080 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.671089 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.671104 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.671116 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.773748 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.773790 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.773800 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.773816 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.773826 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.875921 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.875989 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.876011 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.876039 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.876060 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.978595 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.978844 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.978907 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.978972 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:55 crc kubenswrapper[4849]: I1209 11:27:55.979038 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:55Z","lastTransitionTime":"2025-12-09T11:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.081310 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.081343 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.081353 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.081366 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.081374 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:56Z","lastTransitionTime":"2025-12-09T11:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.184512 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.184571 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.184582 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.184602 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.184615 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:56Z","lastTransitionTime":"2025-12-09T11:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.286966 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.287026 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.287043 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.287069 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.287087 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:56Z","lastTransitionTime":"2025-12-09T11:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.390041 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.390094 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.390112 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.390176 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.390198 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:56Z","lastTransitionTime":"2025-12-09T11:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.493562 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.493947 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.494116 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.494293 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.494492 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:56Z","lastTransitionTime":"2025-12-09T11:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.535496 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.535568 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:56 crc kubenswrapper[4849]: E1209 11:27:56.535658 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.535530 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:56 crc kubenswrapper[4849]: E1209 11:27:56.535787 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:56 crc kubenswrapper[4849]: E1209 11:27:56.535851 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.597671 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.597713 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.597722 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.597737 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.597749 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:56Z","lastTransitionTime":"2025-12-09T11:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.699934 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.699983 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.699992 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.700006 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.700014 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:56Z","lastTransitionTime":"2025-12-09T11:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.802107 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.802146 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.802155 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.802168 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.802178 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:56Z","lastTransitionTime":"2025-12-09T11:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.904465 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.904496 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.904504 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.904516 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:56 crc kubenswrapper[4849]: I1209 11:27:56.904526 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:56Z","lastTransitionTime":"2025-12-09T11:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.007006 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.007049 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.007065 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.007083 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.007096 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.108196 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/2.log" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.108873 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/1.log" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.109199 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.109251 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.109261 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.109280 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.109488 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.112119 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28" exitCode=1 Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.112154 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.112184 4849 scope.go:117] "RemoveContainer" containerID="ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.113896 4849 scope.go:117] "RemoveContainer" containerID="6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28" Dec 09 11:27:57 crc kubenswrapper[4849]: E1209 11:27:57.114346 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.129242 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf8
6\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.142483 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.156098 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.171817 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.188053 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.204514 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.211905 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.212095 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.212227 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.212330 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.212450 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.221719 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.235125 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.244967 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.268275 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.282135 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.294543 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.307733 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.315014 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.315064 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.315076 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.315103 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.315124 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.321803 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.340354 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.353586 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.367629 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.389350 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:56Z\\\",\\\"message\\\":\\\"(0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093363 6451 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1209 11:27:56.093677 6451 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093984 6451 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:56.094075 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:27:56.094150 6451 factory.go:656] Stopping watch factory\\\\nI1209 11:27:56.094167 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:27:56.098002 6451 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:27:56.098055 6451 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:27:56.098121 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1209 11:27:56.098164 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:27:56.098250 6451 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2
eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:57Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.417800 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.417870 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.417888 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.418283 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.418306 4849 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.521530 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.521590 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.521602 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.521621 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.521634 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.535394 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:57 crc kubenswrapper[4849]: E1209 11:27:57.535610 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.625487 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.625542 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.625551 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.625566 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.625575 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.728016 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.728057 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.728067 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.728082 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.728093 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.830351 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.830446 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.830461 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.830492 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.830512 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.933244 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.933277 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.933287 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.933300 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:57 crc kubenswrapper[4849]: I1209 11:27:57.933318 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:57Z","lastTransitionTime":"2025-12-09T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.035074 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.035104 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.035115 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.035131 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.035142 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.115721 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/2.log" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.137225 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.137490 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.137617 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.137767 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.137903 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.240823 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.240867 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.240878 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.240895 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.240906 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.343356 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.343644 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.343920 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.344104 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.344253 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.446852 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.447131 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.447196 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.447259 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.447314 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.535830 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:27:58 crc kubenswrapper[4849]: E1209 11:27:58.536238 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.536103 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:27:58 crc kubenswrapper[4849]: E1209 11:27:58.536539 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.536308 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:27:58 crc kubenswrapper[4849]: E1209 11:27:58.536788 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.552013 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.552089 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.552109 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.552139 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.552164 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.553531 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.569538 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.587462 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.606689 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.626056 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.640690 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.655784 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.655866 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.655938 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.655972 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.655994 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.672989 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.691355 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.707813 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.724234 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.745395 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:56Z\\\",\\\"message\\\":\\\"(0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093363 6451 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1209 11:27:56.093677 6451 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093984 6451 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:56.094075 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:27:56.094150 6451 factory.go:656] Stopping watch factory\\\\nI1209 11:27:56.094167 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:27:56.098002 6451 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:27:56.098055 6451 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:27:56.098121 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1209 11:27:56.098164 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:27:56.098250 6451 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2
eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.759042 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.759113 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.759132 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.759159 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.759178 4849 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.764736 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8
945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.783195 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.802699 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.819086 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.832747 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.851374 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.862453 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.862733 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.862799 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.862885 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.862962 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.866689 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.965464 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.965510 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.965524 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.965574 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.965585 4849 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.975510 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.975560 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.975575 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.975592 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.975940 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:58 crc kubenswrapper[4849]: E1209 11:27:58.991746 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:58Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.996044 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.996188 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.996274 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.996385 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:58 crc kubenswrapper[4849]: I1209 11:27:58.996580 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:58Z","lastTransitionTime":"2025-12-09T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: E1209 11:27:59.009454 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:59Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.013400 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.013476 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.013495 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.013519 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.013536 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: E1209 11:27:59.026409 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:59Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.030599 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.030757 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.031060 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.031263 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.031464 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: E1209 11:27:59.042571 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:59Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.046905 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.047051 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.047128 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.047210 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.047284 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: E1209 11:27:59.061082 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:59Z is after 2025-08-24T17:21:41Z" Dec 09 11:27:59 crc kubenswrapper[4849]: E1209 11:27:59.061252 4849 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.068127 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.068161 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.068172 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.068188 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.068199 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.170766 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.170805 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.170816 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.170833 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.170847 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.274335 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.275066 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.275183 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.275275 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.275350 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.377908 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.377946 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.377956 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.377970 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.377982 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.482155 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.482509 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.482621 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.482718 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.482813 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.536733 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:27:59 crc kubenswrapper[4849]: E1209 11:27:59.536935 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.585827 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.586228 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.586503 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.586613 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.586701 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.690622 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.690678 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.690691 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.690712 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.690723 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.794257 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.794844 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.794929 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.795018 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.795091 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.899359 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.900554 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.900651 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.900758 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:27:59 crc kubenswrapper[4849]: I1209 11:27:59.900858 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:27:59Z","lastTransitionTime":"2025-12-09T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.004766 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.004828 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.004841 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.004860 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.004876 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.108429 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.108474 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.108485 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.108513 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.108525 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.212353 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.212439 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.212456 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.212477 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.212493 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.320597 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.320642 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.320655 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.320677 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.320691 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.424333 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.424506 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.424527 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.424551 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.424569 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.528650 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.528701 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.528713 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.528733 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.528747 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.536105 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.536191 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:00 crc kubenswrapper[4849]: E1209 11:28:00.536293 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:00 crc kubenswrapper[4849]: E1209 11:28:00.536491 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.536622 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:00 crc kubenswrapper[4849]: E1209 11:28:00.536723 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.632510 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.632567 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.632579 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.632617 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.632632 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.740814 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.740895 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.740910 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.740934 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.740953 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.844201 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.844740 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.844827 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.844895 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.844960 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.948263 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.949028 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.949147 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.949256 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:00 crc kubenswrapper[4849]: I1209 11:28:00.949336 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:00Z","lastTransitionTime":"2025-12-09T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.053201 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.053259 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.053272 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.053295 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.053309 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.156466 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.156901 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.156980 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.157048 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.157122 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.260673 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.260720 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.260738 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.260758 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.260773 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.364054 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.364108 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.364117 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.364142 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.364155 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.466953 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.467013 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.467027 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.467046 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.467060 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.535891 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:01 crc kubenswrapper[4849]: E1209 11:28:01.536027 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.569249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.569287 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.569298 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.569315 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.569327 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.672276 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.672314 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.672323 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.672341 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.672355 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.775323 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.775360 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.775369 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.775390 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.775402 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.878803 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.879297 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.879384 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.879514 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.879598 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.983270 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.983320 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.983330 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.983353 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:01 crc kubenswrapper[4849]: I1209 11:28:01.983364 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:01Z","lastTransitionTime":"2025-12-09T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.086528 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.086585 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.086597 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.086620 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.086636 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:02Z","lastTransitionTime":"2025-12-09T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.188953 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.188990 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.189001 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.189017 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.189027 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:02Z","lastTransitionTime":"2025-12-09T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.291955 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.291994 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.292004 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.292021 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.292034 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:02Z","lastTransitionTime":"2025-12-09T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.395178 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.395212 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.395222 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.395239 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.395249 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:02Z","lastTransitionTime":"2025-12-09T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.499031 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.499070 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.499080 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.499099 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.499110 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:02Z","lastTransitionTime":"2025-12-09T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.536631 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.536631 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.536651 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:02 crc kubenswrapper[4849]: E1209 11:28:02.536810 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:02 crc kubenswrapper[4849]: E1209 11:28:02.536865 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:02 crc kubenswrapper[4849]: E1209 11:28:02.536917 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.601795 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.601835 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.601845 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.601860 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.601870 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:02Z","lastTransitionTime":"2025-12-09T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.704074 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.704113 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.704123 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.704141 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.704152 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:02Z","lastTransitionTime":"2025-12-09T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.807122 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.807185 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.807197 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.807213 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.807222 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:02Z","lastTransitionTime":"2025-12-09T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.909369 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.909405 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.909436 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.909449 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:02 crc kubenswrapper[4849]: I1209 11:28:02.909458 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:02Z","lastTransitionTime":"2025-12-09T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.011402 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.011450 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.011459 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.011473 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.011482 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.113777 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.113809 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.113817 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.113830 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.113839 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.216254 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.216305 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.216317 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.216334 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.216650 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.319331 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.319359 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.319372 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.319430 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.319443 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.421534 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.421573 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.421584 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.421603 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.421617 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.524699 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.524970 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.525038 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.525105 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.525164 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.536005 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:03 crc kubenswrapper[4849]: E1209 11:28:03.536142 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.627630 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.627658 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.627665 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.627679 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.627687 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.729789 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.729825 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.729836 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.729851 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.729861 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.832016 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.832051 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.832060 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.832074 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.832083 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.933953 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.933990 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.934000 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.934014 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:03 crc kubenswrapper[4849]: I1209 11:28:03.934024 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:03Z","lastTransitionTime":"2025-12-09T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.035868 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.035895 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.035907 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.035924 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.035935 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.143532 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.143567 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.143578 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.143592 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.143602 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.246097 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.246139 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.246151 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.246173 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.246186 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.348467 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.348513 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.348522 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.348537 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.348547 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.450926 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.450959 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.450970 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.450985 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.450996 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.536276 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:04 crc kubenswrapper[4849]: E1209 11:28:04.536565 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.536604 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.536636 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:04 crc kubenswrapper[4849]: E1209 11:28:04.536774 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:04 crc kubenswrapper[4849]: E1209 11:28:04.536919 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.552944 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.553012 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.553028 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.553057 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.553073 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.656316 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.656348 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.656357 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.656373 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.656385 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.759108 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.759149 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.759161 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.759177 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.759188 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.864944 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.864995 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.865007 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.865026 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.865042 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.967058 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.967098 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.967108 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.967120 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:04 crc kubenswrapper[4849]: I1209 11:28:04.967129 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:04Z","lastTransitionTime":"2025-12-09T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.069651 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.069689 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.069700 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.069717 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.069728 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.171645 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.171672 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.171680 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.171692 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.171702 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.274379 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.274481 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.274500 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.274528 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.274550 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.376775 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.376819 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.376833 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.376848 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.376858 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.479879 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.479916 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.479927 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.479942 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.479955 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.536382 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:05 crc kubenswrapper[4849]: E1209 11:28:05.536540 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.582941 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.583041 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.583062 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.583136 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.583162 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.685962 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.686023 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.686037 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.686056 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.686089 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.729727 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:05 crc kubenswrapper[4849]: E1209 11:28:05.729842 4849 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:28:05 crc kubenswrapper[4849]: E1209 11:28:05.729890 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs podName:fa5f421b-d486-4b0d-a615-7887df025c00 nodeName:}" failed. No retries permitted until 2025-12-09 11:28:37.729875443 +0000 UTC m=+100.269759749 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs") pod "network-metrics-daemon-qcffq" (UID: "fa5f421b-d486-4b0d-a615-7887df025c00") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.787890 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.787927 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.787937 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.787951 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.787960 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.890770 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.890812 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.890830 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.890848 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.890860 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.993499 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.993562 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.993574 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.993610 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:05 crc kubenswrapper[4849]: I1209 11:28:05.993624 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:05Z","lastTransitionTime":"2025-12-09T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.096391 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.096476 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.096489 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.096526 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.096542 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:06Z","lastTransitionTime":"2025-12-09T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.199768 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.199872 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.199895 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.199957 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.199983 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:06Z","lastTransitionTime":"2025-12-09T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.303911 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.303952 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.303962 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.303978 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.303988 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:06Z","lastTransitionTime":"2025-12-09T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.406906 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.406942 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.406951 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.406965 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.406977 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:06Z","lastTransitionTime":"2025-12-09T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.509597 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.509634 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.509642 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.509657 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.509667 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:06Z","lastTransitionTime":"2025-12-09T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.536099 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:06 crc kubenswrapper[4849]: E1209 11:28:06.536192 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.536099 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.536099 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:06 crc kubenswrapper[4849]: E1209 11:28:06.536278 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:06 crc kubenswrapper[4849]: E1209 11:28:06.536363 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.611330 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.611370 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.611379 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.611394 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.611403 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:06Z","lastTransitionTime":"2025-12-09T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.713666 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.713741 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.713754 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.713769 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.713782 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:06Z","lastTransitionTime":"2025-12-09T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.816230 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.816280 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.816297 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.816317 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.816329 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:06Z","lastTransitionTime":"2025-12-09T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.918510 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.918542 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.918553 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.918568 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:06 crc kubenswrapper[4849]: I1209 11:28:06.918578 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:06Z","lastTransitionTime":"2025-12-09T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.021092 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.021146 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.021159 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.021175 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.021185 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.123113 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.123174 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.123189 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.123205 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.123216 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.225069 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.225126 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.225140 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.225155 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.225166 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.327360 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.327403 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.327432 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.327452 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.327469 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.430109 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.430145 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.430156 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.430171 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.430181 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.532604 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.532635 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.532645 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.532661 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.532673 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.535821 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq"
Dec 09 11:28:07 crc kubenswrapper[4849]: E1209 11:28:07.535966 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.634600 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.634872 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.634958 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.635049 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.635121 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.739389 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.739475 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.739488 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.739504 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.739516 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.841822 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.841862 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.841873 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.841890 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.841901 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.944530 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.944564 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.944572 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.944588 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:07 crc kubenswrapper[4849]: I1209 11:28:07.944599 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:07Z","lastTransitionTime":"2025-12-09T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.047326 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.047364 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.047371 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.047386 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.047395 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.149400 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.150132 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.150173 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.150196 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.150209 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.252384 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.252439 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.252448 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.252463 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.252475 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.355347 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.355433 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.355447 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.355465 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.355481 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
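The cycle above repeats roughly every 100 ms: the kubelet re-evaluates node readiness and keeps reporting NetworkReady=false until a CNI config file appears in the directory named in the message. A minimal sketch of the check the runtime is effectively performing (assuming Python 3 is available on the node; the path is taken from the log itself, and the .conf/.conflist/.json extensions are an assumption based on common CNI loader conventions):

    import pathlib

    # Path quoted verbatim from the log: "no CNI configuration file in /etc/kubernetes/cni/net.d/"
    NET_D = pathlib.Path("/etc/kubernetes/cni/net.d")

    # Extensions are an assumption from the usual CNI convention, not confirmed by this log.
    configs = sorted(
        p for p in NET_D.glob("*")
        if p.suffix in {".conf", ".conflist", ".json"}
    ) if NET_D.is_dir() else []

    if not configs:
        # This is the state the kubelet keeps reporting: NetworkReady=false.
        print(f"no CNI configuration file in {NET_D}/ -- network plugin not ready")
    else:
        print("candidate CNI configs:", *configs, sep="\n  ")

An empty listing here matches the NodeNotReady loop above; the loop should break once the network operator writes a config into that directory.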
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.458224 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.458258 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.458268 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.458283 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.458293 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.535979 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.535986 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 11:28:08 crc kubenswrapper[4849]: E1209 11:28:08.536108 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.535995 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 11:28:08 crc kubenswrapper[4849]: E1209 11:28:08.536486 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 11:28:08 crc kubenswrapper[4849]: E1209 11:28:08.536562 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.550518 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.560495 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.560529 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.560539 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.560553 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.560564 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.561365 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.575907 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.589293 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.601700 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.612212 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.626483 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.640745 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.651391 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.663311 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.663356 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.663367 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.663384 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.663400 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.677175 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.691481 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.703998 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.718139 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.732365 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.748137 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.760958 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.766546 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.766582 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.766592 4849 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.766608 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.766619 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.774662 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.793547 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b5
9abf069638699af6c8976d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed4f288808ad08a547b730f55c9019750e22c44b9ffb3a747fd331574c388f1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:32Z\\\",\\\"message\\\":\\\" openshift-multus/multus-h76bl\\\\nI1209 11:27:32.322288 6182 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1209 11:27:32.322290 6182 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1209 11:27:32.322293 6182 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-h76bl in node crc\\\\nF1209 11:27:32.322295 6182 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:27:32Z is after 2025-08-24T17:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:56Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093363 6451 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1209 11:27:56.093677 6451 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093984 6451 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:56.094075 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:27:56.094150 6451 factory.go:656] Stopping watch factory\\\\nI1209 11:27:56.094167 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:27:56.098002 6451 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:27:56.098055 6451 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:27:56.098121 6451 ovnkube.go:599] Stopped 
ovnkube\\\\nI1209 11:27:56.098164 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:27:56.098250 6451 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:08Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.869200 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.869247 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.869259 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.869276 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.869296 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.972374 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.972472 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.972493 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.972517 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:08 crc kubenswrapper[4849]: I1209 11:28:08.972536 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:08Z","lastTransitionTime":"2025-12-09T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.074582 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.074617 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.074626 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.074640 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.074649 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: E1209 11:28:09.086765 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.090285 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.090353 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.090373 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.090393 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.090429 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: E1209 11:28:09.103678 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.107809 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.107841 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.107849 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.107863 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.107889 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: E1209 11:28:09.119463 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.122594 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.122619 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.122627 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.122639 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.122647 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: E1209 11:28:09.133185 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.136775 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.136799 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.136809 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.136822 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.136831 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: E1209 11:28:09.148275 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: E1209 11:28:09.148496 4849 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.150044 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
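Annotation (not part of the captured log): every node-status patch above fails the same way: the API server cannot call the admission webhook named in the error, because the endpoint at https://127.0.0.1:9743 serves a certificate whose notAfter (2025-08-24T17:21:41Z) predates the node clock (2025-12-09T11:28:09Z). A minimal sketch to confirm the served certificate's validity window, assuming direct shell access on the node and using only the Python standard library; the host and port are taken verbatim from the failed Post in the log:

    # check_webhook_cert.py -- illustrative sketch, not part of the log.
    # Connects to the webhook endpoint from the kubelet error above and
    # dumps its serving certificate so notBefore/notAfter can be compared
    # with the node clock. Verification is disabled on purpose: the point
    # is to retrieve a certificate that no longer verifies.
    import socket
    import ssl

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed Post in the log

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # accept the expired certificate

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes

    # Print the certificate in PEM form; pipe the output through
    # `openssl x509 -noout -dates` to read the validity window directly.
    print(ssl.DER_cert_to_PEM_cert(der))

If the printed notAfter matches the 2025-08-24 date in the error, the failure is purely a stale internal certificate; the log itself does not show the remediation.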
event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.150077 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.150089 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.150103 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.150114 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.252181 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.252219 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.252229 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.252243 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.252253 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.354502 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.354535 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.354545 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.354560 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.354571 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.456949 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.456987 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.456997 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.457014 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.457035 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.535711 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:09 crc kubenswrapper[4849]: E1209 11:28:09.535908 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.536743 4849 scope.go:117] "RemoveContainer" containerID="6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28" Dec 09 11:28:09 crc kubenswrapper[4849]: E1209 11:28:09.536993 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.553272 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.559828 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.559894 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.559908 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.559925 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.559936 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.572037 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.585046 4849 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.597372 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.609242 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.637320 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.651602 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.662534 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.662581 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.662598 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.662621 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.662636 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.667806 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.680035 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.697000 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:56Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093363 6451 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1209 11:27:56.093677 6451 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093984 6451 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:56.094075 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:27:56.094150 6451 factory.go:656] Stopping watch factory\\\\nI1209 11:27:56.094167 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:27:56.098002 6451 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:27:56.098055 6451 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:27:56.098121 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1209 11:27:56.098164 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:27:56.098250 6451 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.711215 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.726787 4849 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.739781 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.750433 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.762042 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.764987 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.765021 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.765031 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.765046 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.765059 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.777433 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.788155 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.799969 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:09Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.867862 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.867912 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.867927 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.867947 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.867961 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.971210 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.971498 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.971604 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.971707 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:09 crc kubenswrapper[4849]: I1209 11:28:09.971798 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:09Z","lastTransitionTime":"2025-12-09T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.074647 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.074679 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.074689 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.074706 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.074716 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.161657 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/0.log" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.161705 4849 generic.go:334] "Generic (PLEG): container finished" podID="e5c6e29f-6131-4daa-b297-81eb53e7384c" containerID="362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40" exitCode=1 Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.161741 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h76bl" event={"ID":"e5c6e29f-6131-4daa-b297-81eb53e7384c","Type":"ContainerDied","Data":"362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.162141 4849 scope.go:117] "RemoveContainer" containerID="362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.177606 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.177647 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.177657 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.177672 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.177681 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.177690 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.194190 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.206322 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.219039 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.232108 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.244613 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.259261 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.275791 4849 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.280162 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.280218 4849 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.280232 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.280256 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.280270 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.288949 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.302776 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"2025-12-09T11:27:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f\\\\n2025-12-09T11:27:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f to /host/opt/cni/bin/\\\\n2025-12-09T11:27:24Z [verbose] multus-daemon started\\\\n2025-12-09T11:27:24Z [verbose] Readiness Indicator file check\\\\n2025-12-09T11:28:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.323632 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 
11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.347791 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.365536 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.384023 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.384073 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.384086 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.384107 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.384119 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.385598 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.399587 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.422852 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:56Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093363 6451 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1209 11:27:56.093677 6451 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093984 6451 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:56.094075 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:27:56.094150 6451 factory.go:656] Stopping watch factory\\\\nI1209 11:27:56.094167 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:27:56.098002 6451 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:27:56.098055 6451 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:27:56.098121 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1209 11:27:56.098164 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:27:56.098250 6451 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.436234 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.448323 4849 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:10Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.486950 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.486996 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:10 crc 
kubenswrapper[4849]: I1209 11:28:10.487008 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.487025 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.487037 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.536380 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.536398 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.536420 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:10 crc kubenswrapper[4849]: E1209 11:28:10.536711 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:10 crc kubenswrapper[4849]: E1209 11:28:10.536846 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:10 crc kubenswrapper[4849]: E1209 11:28:10.536909 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.589337 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.589385 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.589397 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.589435 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.589453 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.692097 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.692132 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.692142 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.692159 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.692170 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.794699 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.794743 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.794752 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.794767 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.794778 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.896942 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.896978 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.896988 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.897002 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:10 crc kubenswrapper[4849]: I1209 11:28:10.897012 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:10.998972 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:10.998996 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:10.999004 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:10.999016 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:10.999024 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:10Z","lastTransitionTime":"2025-12-09T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.100920 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.100956 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.100970 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.100991 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.101002 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:11Z","lastTransitionTime":"2025-12-09T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.167126 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/0.log" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.167172 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h76bl" event={"ID":"e5c6e29f-6131-4daa-b297-81eb53e7384c","Type":"ContainerStarted","Data":"954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.182476 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.193630 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.207692 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.208133 4849 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.208165 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.208181 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.208201 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.208262 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:11Z","lastTransitionTime":"2025-12-09T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.227196 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b5
9abf069638699af6c8976d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:56Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093363 6451 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1209 11:27:56.093677 6451 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093984 6451 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:56.094075 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:27:56.094150 6451 factory.go:656] Stopping watch factory\\\\nI1209 11:27:56.094167 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:27:56.098002 6451 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:27:56.098055 6451 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:27:56.098121 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1209 11:27:56.098164 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:27:56.098250 6451 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.241038 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.251451 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.262841 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.274122 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.285326 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.295024 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.306353 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.310366 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.310398 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.310422 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.310440 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.310454 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:11Z","lastTransitionTime":"2025-12-09T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.318317 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.326813 4849 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.344104 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.357384 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.373482 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.389622 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"2025-12-09T11:27:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f\\\\n2025-12-09T11:27:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f to /host/opt/cni/bin/\\\\n2025-12-09T11:27:24Z [verbose] multus-daemon started\\\\n2025-12-09T11:27:24Z [verbose] Readiness Indicator file check\\\\n2025-12-09T11:28:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.406122 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:11Z is after 2025-08-24T17:21:41Z" Dec 09 
11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.413039 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.413078 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.413091 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.413108 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.413120 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:11Z","lastTransitionTime":"2025-12-09T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.515163 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.515189 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.515197 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.515211 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.515220 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:11Z","lastTransitionTime":"2025-12-09T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.535991 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:11 crc kubenswrapper[4849]: E1209 11:28:11.536174 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.617081 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.617120 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.617131 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.617147 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.617159 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:11Z","lastTransitionTime":"2025-12-09T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.719702 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.719762 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.719776 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.719791 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.719803 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:11Z","lastTransitionTime":"2025-12-09T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.822025 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.822083 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.822098 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.822117 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.822128 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:11Z","lastTransitionTime":"2025-12-09T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.925706 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.925762 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.925774 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.925788 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:11 crc kubenswrapper[4849]: I1209 11:28:11.925797 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:11Z","lastTransitionTime":"2025-12-09T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.028130 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.028233 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.028249 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.028268 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.028280 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.131229 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.131280 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.131292 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.131309 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.131320 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.233753 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.233812 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.233827 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.233848 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.233862 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.336013 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.336082 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.336101 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.336126 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.336144 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.439096 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.439136 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.439147 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.439163 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.439173 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.535886 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.535905 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.535904 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:12 crc kubenswrapper[4849]: E1209 11:28:12.536217 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:12 crc kubenswrapper[4849]: E1209 11:28:12.536034 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:12 crc kubenswrapper[4849]: E1209 11:28:12.536318 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.541190 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.541224 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.541237 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.541254 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.541269 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.644257 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.644299 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.644310 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.644329 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.644339 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.747814 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.747850 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.747860 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.747877 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.747888 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.850537 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.850576 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.850587 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.850607 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.850620 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.953537 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.953585 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.953593 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.953612 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:12 crc kubenswrapper[4849]: I1209 11:28:12.953625 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:12Z","lastTransitionTime":"2025-12-09T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.056797 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.056855 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.056866 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.056909 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.056922 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.159751 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.159810 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.159823 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.159845 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.159858 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.263148 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.263190 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.263199 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.263215 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.263224 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.365906 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.365950 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.365963 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.365983 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.365998 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.468721 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.468768 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.468783 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.468800 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.468811 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.536379 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:13 crc kubenswrapper[4849]: E1209 11:28:13.536654 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.571846 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.571884 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.572671 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.572709 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.572729 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.674940 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.675007 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.675026 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.675051 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.675067 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.777849 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.777934 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.777946 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.777962 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.777972 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.881122 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.881173 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.881201 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.881223 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.881235 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.983962 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.984040 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.984051 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.984073 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:13 crc kubenswrapper[4849]: I1209 11:28:13.984092 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:13Z","lastTransitionTime":"2025-12-09T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.087579 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.087622 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.087634 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.087651 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.087662 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:14Z","lastTransitionTime":"2025-12-09T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.196594 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.196654 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.196674 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.196709 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.196728 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:14Z","lastTransitionTime":"2025-12-09T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.299329 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.299378 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.299391 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.299428 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.299441 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:14Z","lastTransitionTime":"2025-12-09T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.402910 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.402949 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.402960 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.402977 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.402988 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:14Z","lastTransitionTime":"2025-12-09T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.505232 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.505266 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.505273 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.505288 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.505296 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:14Z","lastTransitionTime":"2025-12-09T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.535622 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:14 crc kubenswrapper[4849]: E1209 11:28:14.535758 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.536009 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 11:28:14 crc kubenswrapper[4849]: E1209 11:28:14.536059 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.536185 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 11:28:14 crc kubenswrapper[4849]: E1209 11:28:14.536263 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.607719 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.607785 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.607800 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.607901 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:14 crc kubenswrapper[4849]: I1209 11:28:14.607925 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:14Z","lastTransitionTime":"2025-12-09T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:15 crc kubenswrapper[4849]: I1209 11:28:15.018705 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:15 crc kubenswrapper[4849]: I1209 11:28:15.018752 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:15 crc kubenswrapper[4849]: I1209 11:28:15.018767 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:15 crc kubenswrapper[4849]: I1209 11:28:15.018787 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:15 crc kubenswrapper[4849]: I1209 11:28:15.018801 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:15Z","lastTransitionTime":"2025-12-09T11:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
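Every "Error syncing pod" entry above carries the same root cause: the kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so the network plugin is reported as not ready and pod sandboxes cannot be created. A minimal Go sketch of the directory check the message implies is below; the path comes from the log line itself, and the program is purely illustrative, not kubelet code.

// cnicheck.go: report whether any CNI network config is present, the
// condition the kubelet error above is complaining about. Illustrative only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the kubelet error message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		// CNI config loaders pick up .conf, .conflist and .json files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("config:", e.Name())
			found++
		}
	}
	if found == 0 {
		// The state reported above: NetworkPluginNotReady until a provider writes a config.
		fmt.Println("no CNI configuration file found")
	}
}

On this node the directory stays empty until the OVN-Kubernetes provider writes its config, which is exactly what the repeated kubelet message is waiting for.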
Dec 09 11:28:15 crc kubenswrapper[4849]: I1209 11:28:15.536301 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq"
Dec 09 11:28:15 crc kubenswrapper[4849]: E1209 11:28:15.536658 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00"
Dec 09 11:28:16 crc kubenswrapper[4849]: I1209 11:28:16.054649 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:16 crc kubenswrapper[4849]: I1209 11:28:16.054699 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:16 crc kubenswrapper[4849]: I1209 11:28:16.054709 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:16 crc kubenswrapper[4849]: I1209 11:28:16.054728 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:16 crc kubenswrapper[4849]: I1209 11:28:16.054741 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:16Z","lastTransitionTime":"2025-12-09T11:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:16 crc kubenswrapper[4849]: I1209 11:28:16.536631 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 11:28:16 crc kubenswrapper[4849]: I1209 11:28:16.536706 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 11:28:16 crc kubenswrapper[4849]: E1209 11:28:16.536777 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 11:28:16 crc kubenswrapper[4849]: I1209 11:28:16.536658 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 11:28:16 crc kubenswrapper[4849]: E1209 11:28:16.536924 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 11:28:16 crc kubenswrapper[4849]: E1209 11:28:16.536965 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
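The five-entry batches above are the kubelet re-recording its node conditions; the "Node became not ready" setter shows the exact Ready condition being written, with reason KubeletNotReady. A short client-go sketch that reads the same condition back from the API server follows; the node name "crc" is taken from the log, while the kubeconfig path is an assumption for illustration.

// readycheck.go: fetch the Ready condition that setters.go is writing above.
// Sketch only: assumes a kubeconfig at $HOME/.kube/config and node name "crc".
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// "Ready" is the condition type logged by the setter entries above.
		if c.Type == "Ready" {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}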
Dec 09 11:28:17 crc kubenswrapper[4849]: I1209 11:28:17.086383 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:17 crc kubenswrapper[4849]: I1209 11:28:17.086440 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:17 crc kubenswrapper[4849]: I1209 11:28:17.086453 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:17 crc kubenswrapper[4849]: I1209 11:28:17.086470 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:17 crc kubenswrapper[4849]: I1209 11:28:17.086482 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:17Z","lastTransitionTime":"2025-12-09T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:17 crc kubenswrapper[4849]: I1209 11:28:17.535726 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq"
Dec 09 11:28:17 crc kubenswrapper[4849]: E1209 11:28:17.535879 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00"
Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.008819 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.008901 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.008912 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.008929 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.008941 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:18Z","lastTransitionTime":"2025-12-09T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
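Starting with the status_manager.go entries below (11:28:18.551900 onward), pod status patches begin failing for a second, independent reason: the API server cannot call the pod.network-node-identity.openshift.io webhook on https://127.0.0.1:9743 because its serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock of 2025-12-09. A sketch that reads that certificate's validity window directly follows; the address is taken from the webhook URL in the log, and verification is deliberately skipped so the expired certificate can be inspected at all.

// certcheck.go: print the validity window of the webhook certificate that
// the kubelet cannot verify below. Address taken from the logged webhook URL.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify: we want to read the expired certificate, not trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the logged error: "certificate has expired or is not yet valid".
		fmt.Println("certificate has expired")
	}
}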
Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.535722 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 11:28:18 crc kubenswrapper[4849]: E1209 11:28:18.535864 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.535731 4849 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:18 crc kubenswrapper[4849]: E1209 11:28:18.535965 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.535976 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:18 crc kubenswrapper[4849]: E1209 11:28:18.536160 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.551900 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d
4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.565367 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.582053 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"2025-12-09T11:27:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f\\\\n2025-12-09T11:27:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f to /host/opt/cni/bin/\\\\n2025-12-09T11:27:24Z [verbose] multus-daemon started\\\\n2025-12-09T11:27:24Z [verbose] Readiness Indicator file check\\\\n2025-12-09T11:28:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.594940 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 
11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.627722 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.627777 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.627788 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.627808 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.627821 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:18Z","lastTransitionTime":"2025-12-09T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.630977 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.648310 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.664231 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.679107 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.701683 4849 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:56Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093363 6451 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1209 11:27:56.093677 6451 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093984 6451 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:56.094075 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:27:56.094150 6451 factory.go:656] Stopping watch factory\\\\nI1209 11:27:56.094167 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:27:56.098002 6451 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:27:56.098055 6451 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:27:56.098121 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1209 11:27:56.098164 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:27:56.098250 6451 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.718734 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.730508 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.730544 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.730555 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.730573 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.730585 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:18Z","lastTransitionTime":"2025-12-09T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.735582 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.750320 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.766137 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.779515 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.793398 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.811620 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.824096 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.834277 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.834316 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.834324 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.834342 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.834354 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:18Z","lastTransitionTime":"2025-12-09T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.839597 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:18Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.936921 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.937247 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.937387 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.937522 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:18 crc kubenswrapper[4849]: I1209 11:28:18.937613 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:18Z","lastTransitionTime":"2025-12-09T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.039584 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.039657 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.039668 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.039690 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.039706 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.142097 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.142162 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.142176 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.142205 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.142224 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.201761 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.201877 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.201897 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.201922 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.201938 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: E1209 11:28:19.224038 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:19Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.229319 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.229367 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.229381 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.229401 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.229440 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: E1209 11:28:19.246087 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:19Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.250893 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.250963 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.250978 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.251001 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.251017 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: E1209 11:28:19.270231 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:19Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.275373 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.275433 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.275446 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.275467 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.275485 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: E1209 11:28:19.290764 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:19Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.295932 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.295973 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
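Every patch attempt above fails for the same reason: the network-node-identity webhook on https://127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) lies months before the node's clock (2025-12-09), so the API server rejects each kubelet status PATCH. A minimal sketch for confirming the expiry from the node itself, assuming Python 3 with the third-party cryptography package is installed and the webhook is still listening (neither assumption is shown in this log):

import datetime
import socket
import ssl

from cryptography import x509  # assumption: 'cryptography' package available

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint taken from the log above

# Disable verification: the point is to read the certificate, not to trust it.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # DER bytes of the leaf cert

cert = x509.load_der_x509_certificate(der)
now = datetime.datetime.utcnow()
print("notAfter:", cert.not_valid_after)      # per the log: 2025-08-24 17:21:41
print("expired:", now > cert.not_valid_after)  # per the log: True

The sketch only confirms the diagnosis; if the certificate really is expired, the fix is to let the cluster's certificate rotation catch up (or, for a disposable CRC instance, to recreate it), not to keep retrying the patch.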
event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.295983 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.296002 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.296013 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: E1209 11:28:19.308987 4849 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6e561bc1-3071-42d3-8f8a-26cb48f3e35f\\\",\\\"systemUUID\\\":\\\"28952ea2-405f-4451-ba01-96f0d1c5ff80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:19Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:19 crc kubenswrapper[4849]: E1209 11:28:19.309153 4849 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.311441 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.311487 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.311498 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.311513 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.311524 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.415126 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.415185 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.415200 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.415219 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.415231 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.519016 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.519095 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.519109 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.519130 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.519143 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.535837 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:19 crc kubenswrapper[4849]: E1209 11:28:19.536019 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.622749 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.622795 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.622806 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.622822 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.622832 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.725443 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.725480 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.725490 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.725514 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.725525 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.827650 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.827693 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.827706 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.827722 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.827731 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.931075 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.931138 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.931155 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.931179 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:19 crc kubenswrapper[4849]: I1209 11:28:19.931195 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:19Z","lastTransitionTime":"2025-12-09T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.033850 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.033899 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.033920 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.033948 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.033968 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.136641 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.136697 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.136737 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.136762 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.136778 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.238871 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.238956 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.238986 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.239014 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.239037 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.341982 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.342073 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.342091 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.342117 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.342136 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.444971 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.445246 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.445321 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.445381 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.445501 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.536193 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:20 crc kubenswrapper[4849]: E1209 11:28:20.537053 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.536440 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:20 crc kubenswrapper[4849]: E1209 11:28:20.537234 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.536198 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:20 crc kubenswrapper[4849]: E1209 11:28:20.537498 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.547383 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.547557 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.547675 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.547767 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.547859 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.650311 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.650351 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.650362 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.650379 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.650390 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.753323 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.753363 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.753376 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.753392 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.753403 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.857637 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.857718 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.857729 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.857750 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.857763 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.961161 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.961599 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.961752 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.961840 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:20 crc kubenswrapper[4849]: I1209 11:28:20.961912 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:20Z","lastTransitionTime":"2025-12-09T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.065102 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.065168 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.065194 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.065222 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.065242 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.168523 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.168568 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.168581 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.168601 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.168614 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.271607 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.271999 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.272221 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.272387 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.272609 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.375512 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.375581 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.375595 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.375617 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.375629 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.478142 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.478179 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.478189 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.478206 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.478218 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.535581 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq"
Dec 09 11:28:21 crc kubenswrapper[4849]: E1209 11:28:21.535813 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.580531 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.580598 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.580611 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.580633 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.580650 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.683467 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.683495 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.683504 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.683518 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.683529 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.786643 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.786696 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.786706 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.786722 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.786731 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.889863 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.889904 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.889913 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.889927 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.889988 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.992529 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.992570 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.992581 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.992596 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:21 crc kubenswrapper[4849]: I1209 11:28:21.992608 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:21Z","lastTransitionTime":"2025-12-09T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.095092 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.095119 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.095128 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.095141 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.095149 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:22Z","lastTransitionTime":"2025-12-09T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.197973 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.198019 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.198031 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.198046 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.198059 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:22Z","lastTransitionTime":"2025-12-09T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.302085 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.302163 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.302174 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.302198 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.302259 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:22Z","lastTransitionTime":"2025-12-09T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.405502 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.405536 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.405545 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.405558 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.405595 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:22Z","lastTransitionTime":"2025-12-09T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.508384 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.508438 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.508448 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.508462 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.508473 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:22Z","lastTransitionTime":"2025-12-09T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.522005 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.522256 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.522234709 +0000 UTC m=+149.062119025 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.536009 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.536009 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.536182 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.536152 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.536730 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.536907 4849 scope.go:117] "RemoveContainer" containerID="6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28"
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.537031 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.610816 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.610847 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.610857 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.610872 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.610882 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:22Z","lastTransitionTime":"2025-12-09T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.624716 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.624799 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.624825 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.624847 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.625157 4849 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.625304 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.625365 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.625379 4849 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.625451 4849 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.625666 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.625294738 +0000 UTC m=+149.165179054 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.625820 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.625800571 +0000 UTC m=+149.165684917 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.625960 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.625947135 +0000 UTC m=+149.165831511 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.626207 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.626227 4849 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.626237 4849 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 11:28:22 crc kubenswrapper[4849]: E1209 11:28:22.626271 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.626254974 +0000 UTC m=+149.166139300 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.714521 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.714892 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.714906 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.714926 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.714938 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:22Z","lastTransitionTime":"2025-12-09T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.816517 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.816545 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.816555 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.816568 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.816576 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:22Z","lastTransitionTime":"2025-12-09T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.918867 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.918894 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.918904 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.918916 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:22 crc kubenswrapper[4849]: I1209 11:28:22.918926 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:22Z","lastTransitionTime":"2025-12-09T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.021622 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.021659 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.021672 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.021688 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.021699 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.124135 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.124174 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.124184 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.124198 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.124209 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.207736 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/2.log"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.210178 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"}
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.210684 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.226734 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.226781 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.226791 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.226808 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.226821 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.228465 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.246701 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.261764 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.287321 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:56Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093363 6451 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1209 11:27:56.093677 6451 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093984 6451 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:56.094075 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:27:56.094150 6451 factory.go:656] Stopping watch factory\\\\nI1209 11:27:56.094167 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:27:56.098002 6451 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:27:56.098055 6451 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:27:56.098121 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1209 11:27:56.098164 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:27:56.098250 6451 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.302826 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.316576 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.330134 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.330180 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.330192 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.330211 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.330223 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.343515 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.355181 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.366149 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.395224 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.410436 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.426625 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.437195 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.437439 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:23 crc 
kubenswrapper[4849]: I1209 11:28:23.437604 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.437707 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.437788 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.439498 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 
09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.458988 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\
"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.518815 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.536337 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:23 crc kubenswrapper[4849]: E1209 11:28:23.536501 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.547857 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.547887 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.547895 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.547908 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.547917 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.571179 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.594021 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"2025-12-09T11:27:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f\\\\n2025-12-09T11:27:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f to /host/opt/cni/bin/\\\\n2025-12-09T11:27:24Z [verbose] multus-daemon started\\\\n2025-12-09T11:27:24Z [verbose] Readiness Indicator file check\\\\n2025-12-09T11:28:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.606689 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:23Z is after 2025-08-24T17:21:41Z" Dec 09 
11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.656446 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.656694 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.656962 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.657080 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.657173 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.761583 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.761622 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.761632 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.761648 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.761660 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.864920 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.865495 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.865675 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.865828 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.865985 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.968727 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.968946 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.969169 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.969268 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:23 crc kubenswrapper[4849]: I1209 11:28:23.969329 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:23Z","lastTransitionTime":"2025-12-09T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.072120 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.072179 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.072196 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.072220 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.072236 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.174831 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.174873 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.174885 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.174902 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.174915 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.277019 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.277079 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.277093 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.277107 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.277117 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.379653 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.379716 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.379727 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.379745 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.379757 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.482581 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.482631 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.482642 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.482658 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.482669 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.536001 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.536092 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.536194 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:24 crc kubenswrapper[4849]: E1209 11:28:24.536186 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:24 crc kubenswrapper[4849]: E1209 11:28:24.536383 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:24 crc kubenswrapper[4849]: E1209 11:28:24.536505 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.549677 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.584270 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.584304 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.584315 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.584347 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.584357 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.687532 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.687605 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.687627 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.687658 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.687681 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.789704 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.790000 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.790111 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.790182 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.790245 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.892327 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.892365 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.892376 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.892390 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.892400 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.994865 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.994905 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.994916 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.994931 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:24 crc kubenswrapper[4849]: I1209 11:28:24.994942 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:24Z","lastTransitionTime":"2025-12-09T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.097577 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.097621 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.097633 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.097649 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.097661 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:25Z","lastTransitionTime":"2025-12-09T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.200336 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.200376 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.200386 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.200398 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.200427 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:25Z","lastTransitionTime":"2025-12-09T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.224126 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/3.log" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.224715 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/2.log" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.227306 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" exitCode=1 Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.227913 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"} Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.227961 4849 scope.go:117] "RemoveContainer" containerID="6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.228979 4849 scope.go:117] "RemoveContainer" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:28:25 crc kubenswrapper[4849]: E1209 11:28:25.229137 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.244957 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.262919 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.273852 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.299669 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.302464 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.302523 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.302534 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.302549 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.302558 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:25Z","lastTransitionTime":"2025-12-09T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.315282 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f142294a-137a-456f-9d4d-3608af79abeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b3baa60b27e9426c2fb55a15e56f8654b7037f032afe6070615e90e2d687856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1dd80f15cbed4c2d519630e12e998f03eee1a516b8548f692ba67b63f79810\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0bc6b3852c040d17f37e3d3b627b9f6fa6f4ab34a6cb6e6b6a18da94d4417d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.331013 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.346498 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h76bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5c6e29f-6131-4daa-b297-81eb53e7384c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:28:09Z\\\",\\\"message\\\":\\\"2025-12-09T11:27:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f\\\\n2025-12-09T11:27:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c02c98a-3fbd-42da-a57d-046eea25533f to /host/opt/cni/bin/\\\\n2025-12-09T11:27:24Z [verbose] multus-daemon started\\\\n2025-12-09T11:27:24Z [verbose] Readiness Indicator file check\\\\n2025-12-09T11:28:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfnlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h76bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.359221 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92bfd32-e3db-4e27-a677-1661aad91e1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab669422a47aa2d44e9a56079d63059402a8de662528a396bdf26acf55da7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://995bc39ce9c3e066c4eb39a316f868097f096e10394b27aebcf39e9caa5d0ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mg9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n9ndf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.371746 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc2b5fc-6215-40c4-910c-0f9595b9a45e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cbe5127dbb2a26b2683200bbda46e462673e98eb672667e624dc0d1f1058d7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ff15c84f80699e723bb08920d3ba539111947258b61611d74c4158714af446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90ff15c84f80699e723bb08920d3ba539111947258b61611d74c4158714af446\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.384038 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec7a78a9-b507-4a06-98c1-50d9390c6a72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 11:27:18.505791 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 11:27:18.505950 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 11:27:18.507148 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1580318386/tls.crt::/tmp/serving-cert-1580318386/tls.key\\\\\\\"\\\\nI1209 11:27:18.973836 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 11:27:18.979099 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 11:27:18.979127 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 11:27:18.979219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 11:27:18.979227 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 11:27:18.983793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 11:27:18.983852 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983857 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 11:27:18.983863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 11:27:18.983866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 11:27:18.983869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 11:27:18.983871 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 11:27:18.983814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 11:27:18.985640 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.397339 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4505c2ff3a60d26d536c0620144787bded4ae672f4dc5bdcec200b53c0bfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b651700a0a9eb15cf94ee11a2eba39b4b7233343cf6315b6b14e08882d1e1447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.404520 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.404576 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.404589 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.404606 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.404616 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:25Z","lastTransitionTime":"2025-12-09T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.407941 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"157c6f6c-042b-4da3-934e-a08474e56486\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ed77a9302433b31194d3ce1c01e8eeea5744f7f140af80a6a09c81c1966e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67zr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-89kpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.426877 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e216f96eac9402d90558b1e6a73d4c9438695b59abf069638699af6c8976d28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:27:56Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093363 6451 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1209 11:27:56.093677 6451 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 11:27:56.093984 6451 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:27:56.094075 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:27:56.094150 6451 factory.go:656] Stopping watch factory\\\\nI1209 11:27:56.094167 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:27:56.098002 6451 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:27:56.098055 6451 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:27:56.098121 6451 ovnkube.go:599] Stopped ovnkube\\\\nI1209 11:27:56.098164 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:27:56.098250 6451 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T11:28:24Z\\\",\\\"message\\\":\\\"2172 6833 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:28:24.022228 6833 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:28:24.022459 6833 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:28:24.022628 6833 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 11:28:24.026038 6833 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 11:28:24.026079 6833 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 11:28:24.026115 6833 factory.go:656] Stopping watch factory\\\\nI1209 11:28:24.026136 6833 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 11:28:24.026145 6833 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 11:28:24.046086 6833 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 11:28:24.046119 6833 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 11:28:24.046178 6833 ovnkube.go:599] Stopped ovnkube\\\\nI1209 11:28:24.046209 6833 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 11:28:24.046296 6833 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T11:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jm22\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6hf97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.441105 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.453534 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.464205 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.475592 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.486539 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.500165 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:25Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.507182 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.507230 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.507243 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.507262 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.507279 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:25Z","lastTransitionTime":"2025-12-09T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.536454 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:25 crc kubenswrapper[4849]: E1209 11:28:25.536634 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.609718 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.609762 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.609775 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.609793 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.609805 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:25Z","lastTransitionTime":"2025-12-09T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.712739 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.712783 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.712796 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.712814 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.712827 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:25Z","lastTransitionTime":"2025-12-09T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.815558 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.815596 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.815607 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.815621 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.815631 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:25Z","lastTransitionTime":"2025-12-09T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.917670 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.917720 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.917734 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.917750 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:25 crc kubenswrapper[4849]: I1209 11:28:25.917762 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:25Z","lastTransitionTime":"2025-12-09T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.021292 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.021348 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.021369 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.021396 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.021457 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.124176 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.124213 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.124224 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.124240 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.124254 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.227124 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.227180 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.227190 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.227203 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.227212 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.230789 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/3.log" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.330101 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.330368 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.330495 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.330587 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.330670 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.433524 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.433767 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.433881 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.433965 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.434044 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.535799 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.535839 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.535976 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:26 crc kubenswrapper[4849]: E1209 11:28:26.536219 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:26 crc kubenswrapper[4849]: E1209 11:28:26.536211 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:26 crc kubenswrapper[4849]: E1209 11:28:26.536622 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.538378 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.538561 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.538684 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.538801 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.538922 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.641916 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.642269 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.642446 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.642601 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.642735 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.745741 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.745791 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.745803 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.745821 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.745835 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.848361 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.848397 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.848427 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.848443 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.848456 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.951052 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.951108 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.951123 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.951138 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:26 crc kubenswrapper[4849]: I1209 11:28:26.951148 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:26Z","lastTransitionTime":"2025-12-09T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.053567 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.053612 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.053624 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.053640 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.053652 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.155808 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.155866 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.155877 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.155894 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.155904 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.258565 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.258600 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.258612 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.258626 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.258637 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.361447 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.361510 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.361522 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.361537 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.361547 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.464472 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.464546 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.464556 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.464570 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.464580 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.536111 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:27 crc kubenswrapper[4849]: E1209 11:28:27.536252 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.566870 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.566909 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.566926 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.566944 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.566955 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.669563 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.669644 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.669674 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.669706 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.669727 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.772927 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.773007 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.773015 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.773030 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.773040 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.875818 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.875860 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.875874 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.875889 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.875906 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.978444 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.978480 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.978496 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.978513 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:27 crc kubenswrapper[4849]: I1209 11:28:27.978524 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:27Z","lastTransitionTime":"2025-12-09T11:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.081189 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.081228 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.081242 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.081257 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.081268 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:28Z","lastTransitionTime":"2025-12-09T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.184021 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.184071 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.184083 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.184097 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.184109 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:28Z","lastTransitionTime":"2025-12-09T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.287722 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.287772 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.287783 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.287798 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.287806 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:28Z","lastTransitionTime":"2025-12-09T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.389799 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.389827 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.389835 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.389846 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.389854 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:28Z","lastTransitionTime":"2025-12-09T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.492211 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.492251 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.492262 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.492277 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.492289 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:28Z","lastTransitionTime":"2025-12-09T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.536185 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.536281 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.536349 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:28 crc kubenswrapper[4849]: E1209 11:28:28.536344 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:28 crc kubenswrapper[4849]: E1209 11:28:28.536452 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:28 crc kubenswrapper[4849]: E1209 11:28:28.536537 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.548720 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qcffq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa5f421b-d486-4b0d-a615-7887df025c00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k84jm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qcffq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.561283 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5aaf6a-290c-4907-9138-e72fb2d70d47\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63eb80d6ef78c44cac4d693ead4c3ba27c4a52a859347f8a1880d460aa03a7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://912e2384686e0ec62b9fa35a44eac781a123ce25d7966176317b63aef74dd153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d9ddc776af8966326e6ee92251b4a127247af456fabe67cf9c86a6cc2d4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba1b10f7dff70d29bb0e11e28154184aeaa3643f9070781696140451a4502239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.572069 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lpj4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d4c399a-d447-4219-9a6f-dcfcb77c7a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94070b067c942c455f8e21efb3c940f57020fc46ead92b906900addcd564d95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fh69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-lpj4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.587718 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z"
Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.594793 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.594834 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.594843 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.594855 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.594863 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:28Z","lastTransitionTime":"2025-12-09T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.602365 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.616220 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d315f9f03740b1286c79501758fbf22251e4688c4267086c34bfd0a6da636c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.630688 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab7c97ac9a8e9e1707aae0172c1a5fbb584168b24705a5ad836976fe347b2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.653623 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de61302b-e1bc-4372-8485-36b4fde18e80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb0358d2f808dd9d4343516e456a887942e94b985a4f338e1f6a0c11ca7da35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e67dafc1191402e3ad91cca0b7bdb3bf1dc8e7b6fd6d752119f621d64f30660a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac62362338bfbe810ce288cb196565a29515274c3e0360867814cb01e504b53b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f32b5f62e6f37df7389edb7690aabf5365fba39885ff701db197194709bebdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59d718d3148aac30f1c14f657bcfe60b6a05b6f8ddfdc9da40148705a9235c10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb353b18485170dccb990dd03d3732b9904f957196dfe9712f7ad9e990b420cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb34a64a8c24e63c761b80960f8010f28d007c22b95773edb9d083be1c982f25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grflc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lwsgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.666074 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qrt6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe9f884-b4dd-4a85-8554-ad36d1ab3b69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e12d795126154781adb0fb9fecab8c31b2e73e3f9b75be3dde92f9e28d9c3d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxjpc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:27:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qrt6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.692707 4849 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eed47f40-f82b-4437-986a-5c2b72ab693a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T11:26:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec50cd211069c2c1a14404acfe68611fdd53721a4a23dbe1aa690587ef6c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca74ac69e0baac7664bbc786f9b4dd29e72e9d753a1a65de4c382c7c7d0e5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cae3eddf0046e37288ec2693092cce907501e00ec9a875299762e84d75e7392\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d26e8cb2a930ae5fb8f968fe63e61c9f71ac6
910752c0884032decdd87048c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d317a9c1e5cdcf18420bf9f27400fdb936b9b19e1a547b41971fe6621ee7935e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T11:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d33e229103e6a97fcf82df387276acc7450832fa311f0247a2db8830447a3bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e628377a6e92bfdc0807e9713266daf06c95621b459b1e0f3dc23ec25e4143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad19d3aace669dc9490c6f3a07445684792a5843333e325b4bfedf3aac286e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T11:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T11:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T11:26:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T11:28:28Z is after 2025-08-24T17:21:41Z" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.697704 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.697735 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.697746 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.697765 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.697778 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:28Z","lastTransitionTime":"2025-12-09T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.734533 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=63.734516934 podStartE2EDuration="1m3.734516934s" podCreationTimestamp="2025-12-09 11:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:28.718538664 +0000 UTC m=+91.258423000" watchObservedRunningTime="2025-12-09 11:28:28.734516934 +0000 UTC m=+91.274401250" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.748799 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-h76bl" podStartSLOduration=69.748780017 podStartE2EDuration="1m9.748780017s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:28.748313074 +0000 UTC m=+91.288197390" watchObservedRunningTime="2025-12-09 11:28:28.748780017 +0000 UTC m=+91.288664333" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.760896 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n9ndf" podStartSLOduration=68.7608768 podStartE2EDuration="1m8.7608768s" podCreationTimestamp="2025-12-09 11:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:28.760270504 +0000 UTC m=+91.300154830" watchObservedRunningTime="2025-12-09 11:28:28.7608768 +0000 UTC m=+91.300761126" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.795925 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.795907525 podStartE2EDuration="1m9.795907525s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:28.795698789 +0000 UTC m=+91.335583115" watchObservedRunningTime="2025-12-09 11:28:28.795907525 +0000 UTC m=+91.335791841" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.796202 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.796195262 podStartE2EDuration="4.796195262s" podCreationTimestamp="2025-12-09 11:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:28.774619809 +0000 UTC m=+91.314504125" watchObservedRunningTime="2025-12-09 11:28:28.796195262 +0000 UTC m=+91.336079578" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.800803 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.800844 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.800855 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.800872 4849 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.800884 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:28Z","lastTransitionTime":"2025-12-09T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.832033 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podStartSLOduration=69.831987658 podStartE2EDuration="1m9.831987658s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:28.831783283 +0000 UTC m=+91.371667619" watchObservedRunningTime="2025-12-09 11:28:28.831987658 +0000 UTC m=+91.371871974" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.903322 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.903361 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.903371 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.903386 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:28 crc kubenswrapper[4849]: I1209 11:28:28.903401 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:28Z","lastTransitionTime":"2025-12-09T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.006291 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.006356 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.006367 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.006386 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.006396 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:29Z","lastTransitionTime":"2025-12-09T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.108740 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.108777 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.108789 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.108805 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.108817 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:29Z","lastTransitionTime":"2025-12-09T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.210686 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.210715 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.210725 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.210737 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.210745 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:29Z","lastTransitionTime":"2025-12-09T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.313773 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.313868 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.313893 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.313925 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.313961 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:29Z","lastTransitionTime":"2025-12-09T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.406978 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.407713 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.408049 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.408077 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.408358 4849 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T11:28:29Z","lastTransitionTime":"2025-12-09T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.464559 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj"] Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.465099 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.467555 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.467705 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.469257 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.469512 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.499790 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lwsgz" podStartSLOduration=70.499766906 podStartE2EDuration="1m10.499766906s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:29.499378774 +0000 UTC m=+92.039263100" watchObservedRunningTime="2025-12-09 11:28:29.499766906 +0000 UTC m=+92.039651262" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.535725 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:29 crc kubenswrapper[4849]: E1209 11:28:29.535855 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.542286 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=70.542271566 podStartE2EDuration="1m10.542271566s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:29.54134177 +0000 UTC m=+92.081226086" watchObservedRunningTime="2025-12-09 11:28:29.542271566 +0000 UTC m=+92.082155882" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.542841 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qrt6l" podStartSLOduration=71.542835661 podStartE2EDuration="1m11.542835661s" podCreationTimestamp="2025-12-09 11:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:29.512515346 +0000 UTC m=+92.052399672" watchObservedRunningTime="2025-12-09 11:28:29.542835661 +0000 UTC m=+92.082719977" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.562876 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.562860673 podStartE2EDuration="35.562860673s" podCreationTimestamp="2025-12-09 11:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:29.562153063 +0000 UTC m=+92.102037379" watchObservedRunningTime="2025-12-09 11:28:29.562860673 +0000 UTC m=+92.102744989" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.590115 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-lpj4f" podStartSLOduration=72.590098282 podStartE2EDuration="1m12.590098282s" podCreationTimestamp="2025-12-09 11:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:29.577372812 +0000 UTC m=+92.117257128" watchObservedRunningTime="2025-12-09 11:28:29.590098282 +0000 UTC m=+92.129982598" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.601888 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3f8cf67f-9401-40cc-85d7-83bbde159811-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.601936 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f8cf67f-9401-40cc-85d7-83bbde159811-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.601989 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3f8cf67f-9401-40cc-85d7-83bbde159811-etc-cvo-updatepayloads\") 
pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.602139 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f8cf67f-9401-40cc-85d7-83bbde159811-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.602230 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8cf67f-9401-40cc-85d7-83bbde159811-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.703649 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f8cf67f-9401-40cc-85d7-83bbde159811-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.703707 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3f8cf67f-9401-40cc-85d7-83bbde159811-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.703728 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f8cf67f-9401-40cc-85d7-83bbde159811-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.703746 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8cf67f-9401-40cc-85d7-83bbde159811-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.703780 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3f8cf67f-9401-40cc-85d7-83bbde159811-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.703830 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3f8cf67f-9401-40cc-85d7-83bbde159811-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: 
\"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.703820 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3f8cf67f-9401-40cc-85d7-83bbde159811-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.705185 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8cf67f-9401-40cc-85d7-83bbde159811-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.715592 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f8cf67f-9401-40cc-85d7-83bbde159811-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.726365 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f8cf67f-9401-40cc-85d7-83bbde159811-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qtchj\" (UID: \"3f8cf67f-9401-40cc-85d7-83bbde159811\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:29 crc kubenswrapper[4849]: I1209 11:28:29.780588 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" Dec 09 11:28:30 crc kubenswrapper[4849]: I1209 11:28:30.248822 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" event={"ID":"3f8cf67f-9401-40cc-85d7-83bbde159811","Type":"ContainerStarted","Data":"2e2875c59471b83fe59c6dae95bf97ad5d1409e20ad40e6b1b63b51ae23d47bb"} Dec 09 11:28:30 crc kubenswrapper[4849]: I1209 11:28:30.249458 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" event={"ID":"3f8cf67f-9401-40cc-85d7-83bbde159811","Type":"ContainerStarted","Data":"a01855300dd7f3ab3231da28dee9c6911f0a9a4ddebe6eaaf0a29d3ee1905e1b"} Dec 09 11:28:30 crc kubenswrapper[4849]: I1209 11:28:30.273727 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtchj" podStartSLOduration=71.273698135 podStartE2EDuration="1m11.273698135s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:28:30.271336991 +0000 UTC m=+92.811221327" watchObservedRunningTime="2025-12-09 11:28:30.273698135 +0000 UTC m=+92.813582541" Dec 09 11:28:30 crc kubenswrapper[4849]: I1209 11:28:30.536000 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:30 crc kubenswrapper[4849]: E1209 11:28:30.536157 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:30 crc kubenswrapper[4849]: I1209 11:28:30.536522 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:30 crc kubenswrapper[4849]: I1209 11:28:30.536589 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:30 crc kubenswrapper[4849]: E1209 11:28:30.537061 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:30 crc kubenswrapper[4849]: E1209 11:28:30.537105 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:31 crc kubenswrapper[4849]: I1209 11:28:31.535679 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:31 crc kubenswrapper[4849]: E1209 11:28:31.535810 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:32 crc kubenswrapper[4849]: I1209 11:28:32.535574 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:32 crc kubenswrapper[4849]: I1209 11:28:32.535585 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:32 crc kubenswrapper[4849]: E1209 11:28:32.535718 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:32 crc kubenswrapper[4849]: I1209 11:28:32.535806 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:32 crc kubenswrapper[4849]: E1209 11:28:32.535965 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:32 crc kubenswrapper[4849]: E1209 11:28:32.536313 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:33 crc kubenswrapper[4849]: I1209 11:28:33.535614 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:33 crc kubenswrapper[4849]: E1209 11:28:33.535754 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:34 crc kubenswrapper[4849]: I1209 11:28:34.536258 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:34 crc kubenswrapper[4849]: I1209 11:28:34.536258 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:34 crc kubenswrapper[4849]: E1209 11:28:34.537042 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:34 crc kubenswrapper[4849]: E1209 11:28:34.537104 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:34 crc kubenswrapper[4849]: I1209 11:28:34.536331 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:34 crc kubenswrapper[4849]: E1209 11:28:34.537188 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:35 crc kubenswrapper[4849]: I1209 11:28:35.536379 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:35 crc kubenswrapper[4849]: E1209 11:28:35.536654 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:36 crc kubenswrapper[4849]: I1209 11:28:36.535913 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:36 crc kubenswrapper[4849]: I1209 11:28:36.536107 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:36 crc kubenswrapper[4849]: I1209 11:28:36.536346 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:36 crc kubenswrapper[4849]: E1209 11:28:36.536468 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:36 crc kubenswrapper[4849]: E1209 11:28:36.536284 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:36 crc kubenswrapper[4849]: E1209 11:28:36.536777 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:37 crc kubenswrapper[4849]: I1209 11:28:37.535561 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:37 crc kubenswrapper[4849]: E1209 11:28:37.535670 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:37 crc kubenswrapper[4849]: I1209 11:28:37.792230 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:37 crc kubenswrapper[4849]: E1209 11:28:37.792340 4849 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:28:37 crc kubenswrapper[4849]: E1209 11:28:37.792390 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs podName:fa5f421b-d486-4b0d-a615-7887df025c00 nodeName:}" failed. No retries permitted until 2025-12-09 11:29:41.792377191 +0000 UTC m=+164.332261507 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs") pod "network-metrics-daemon-qcffq" (UID: "fa5f421b-d486-4b0d-a615-7887df025c00") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 11:28:38 crc kubenswrapper[4849]: I1209 11:28:38.536237 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:38 crc kubenswrapper[4849]: E1209 11:28:38.537693 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:38 crc kubenswrapper[4849]: I1209 11:28:38.537740 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:38 crc kubenswrapper[4849]: I1209 11:28:38.537802 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:38 crc kubenswrapper[4849]: E1209 11:28:38.538610 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:38 crc kubenswrapper[4849]: I1209 11:28:38.538829 4849 scope.go:117] "RemoveContainer" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:28:38 crc kubenswrapper[4849]: E1209 11:28:38.539153 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:38 crc kubenswrapper[4849]: E1209 11:28:38.539229 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" Dec 09 11:28:39 crc kubenswrapper[4849]: I1209 11:28:39.535482 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:39 crc kubenswrapper[4849]: E1209 11:28:39.535710 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:40 crc kubenswrapper[4849]: I1209 11:28:40.535968 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:40 crc kubenswrapper[4849]: I1209 11:28:40.536033 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:40 crc kubenswrapper[4849]: I1209 11:28:40.535968 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:40 crc kubenswrapper[4849]: E1209 11:28:40.536173 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:40 crc kubenswrapper[4849]: E1209 11:28:40.536252 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:40 crc kubenswrapper[4849]: E1209 11:28:40.536347 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:41 crc kubenswrapper[4849]: I1209 11:28:41.536211 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:41 crc kubenswrapper[4849]: E1209 11:28:41.536384 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:42 crc kubenswrapper[4849]: I1209 11:28:42.536214 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:42 crc kubenswrapper[4849]: I1209 11:28:42.536209 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:42 crc kubenswrapper[4849]: E1209 11:28:42.536360 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:42 crc kubenswrapper[4849]: E1209 11:28:42.536621 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:42 crc kubenswrapper[4849]: I1209 11:28:42.537255 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:42 crc kubenswrapper[4849]: E1209 11:28:42.537542 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:43 crc kubenswrapper[4849]: I1209 11:28:43.536339 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:43 crc kubenswrapper[4849]: E1209 11:28:43.536500 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:44 crc kubenswrapper[4849]: I1209 11:28:44.535511 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:44 crc kubenswrapper[4849]: I1209 11:28:44.535537 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:44 crc kubenswrapper[4849]: I1209 11:28:44.535512 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:44 crc kubenswrapper[4849]: E1209 11:28:44.535635 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:44 crc kubenswrapper[4849]: E1209 11:28:44.535677 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:44 crc kubenswrapper[4849]: E1209 11:28:44.535739 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:45 crc kubenswrapper[4849]: I1209 11:28:45.535935 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:45 crc kubenswrapper[4849]: E1209 11:28:45.536155 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:46 crc kubenswrapper[4849]: I1209 11:28:46.536514 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:46 crc kubenswrapper[4849]: I1209 11:28:46.536621 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:46 crc kubenswrapper[4849]: E1209 11:28:46.536713 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:46 crc kubenswrapper[4849]: E1209 11:28:46.536822 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:46 crc kubenswrapper[4849]: I1209 11:28:46.536950 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:46 crc kubenswrapper[4849]: E1209 11:28:46.537148 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:47 crc kubenswrapper[4849]: I1209 11:28:47.535684 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:47 crc kubenswrapper[4849]: E1209 11:28:47.536378 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:48 crc kubenswrapper[4849]: I1209 11:28:48.548037 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:48 crc kubenswrapper[4849]: I1209 11:28:48.536596 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:48 crc kubenswrapper[4849]: E1209 11:28:48.555723 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:48 crc kubenswrapper[4849]: I1209 11:28:48.555928 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:48 crc kubenswrapper[4849]: E1209 11:28:48.556087 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:48 crc kubenswrapper[4849]: E1209 11:28:48.556460 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:49 crc kubenswrapper[4849]: I1209 11:28:49.536196 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:49 crc kubenswrapper[4849]: E1209 11:28:49.536357 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:50 crc kubenswrapper[4849]: I1209 11:28:50.536462 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:50 crc kubenswrapper[4849]: I1209 11:28:50.536491 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:50 crc kubenswrapper[4849]: I1209 11:28:50.536604 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:50 crc kubenswrapper[4849]: E1209 11:28:50.537743 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:50 crc kubenswrapper[4849]: E1209 11:28:50.537883 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:50 crc kubenswrapper[4849]: E1209 11:28:50.538002 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:51 crc kubenswrapper[4849]: I1209 11:28:51.535427 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:51 crc kubenswrapper[4849]: E1209 11:28:51.535929 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:52 crc kubenswrapper[4849]: I1209 11:28:52.536224 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:52 crc kubenswrapper[4849]: I1209 11:28:52.536255 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:52 crc kubenswrapper[4849]: E1209 11:28:52.536537 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:52 crc kubenswrapper[4849]: E1209 11:28:52.536742 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:52 crc kubenswrapper[4849]: I1209 11:28:52.536255 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:52 crc kubenswrapper[4849]: E1209 11:28:52.536860 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:53 crc kubenswrapper[4849]: I1209 11:28:53.535542 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:53 crc kubenswrapper[4849]: E1209 11:28:53.535675 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:53 crc kubenswrapper[4849]: I1209 11:28:53.536744 4849 scope.go:117] "RemoveContainer" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:28:53 crc kubenswrapper[4849]: E1209 11:28:53.536981 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6hf97_openshift-ovn-kubernetes(205e41c5-82b8-4bac-a27a-49f1e0da94e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" Dec 09 11:28:54 crc kubenswrapper[4849]: I1209 11:28:54.536226 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:54 crc kubenswrapper[4849]: I1209 11:28:54.536368 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:54 crc kubenswrapper[4849]: E1209 11:28:54.536925 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:54 crc kubenswrapper[4849]: E1209 11:28:54.537070 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:54 crc kubenswrapper[4849]: I1209 11:28:54.537571 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:54 crc kubenswrapper[4849]: E1209 11:28:54.537825 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:55 crc kubenswrapper[4849]: I1209 11:28:55.535988 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:55 crc kubenswrapper[4849]: E1209 11:28:55.536213 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:56 crc kubenswrapper[4849]: I1209 11:28:56.535542 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:56 crc kubenswrapper[4849]: I1209 11:28:56.535580 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:56 crc kubenswrapper[4849]: I1209 11:28:56.535580 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:56 crc kubenswrapper[4849]: E1209 11:28:56.535749 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:56 crc kubenswrapper[4849]: E1209 11:28:56.536101 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:56 crc kubenswrapper[4849]: E1209 11:28:56.536204 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:57 crc kubenswrapper[4849]: I1209 11:28:57.341781 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/1.log" Dec 09 11:28:57 crc kubenswrapper[4849]: I1209 11:28:57.342246 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/0.log" Dec 09 11:28:57 crc kubenswrapper[4849]: I1209 11:28:57.342298 4849 generic.go:334] "Generic (PLEG): container finished" podID="e5c6e29f-6131-4daa-b297-81eb53e7384c" containerID="954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b" exitCode=1 Dec 09 11:28:57 crc kubenswrapper[4849]: I1209 11:28:57.342332 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h76bl" event={"ID":"e5c6e29f-6131-4daa-b297-81eb53e7384c","Type":"ContainerDied","Data":"954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b"} Dec 09 11:28:57 crc kubenswrapper[4849]: I1209 11:28:57.342367 4849 scope.go:117] "RemoveContainer" containerID="362e3a0128f49354875eae1318357f323d07d0f5a9ba3ca8350fb66420b9bd40" Dec 09 11:28:57 crc kubenswrapper[4849]: I1209 11:28:57.342867 4849 scope.go:117] "RemoveContainer" containerID="954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b" Dec 09 11:28:57 crc kubenswrapper[4849]: E1209 11:28:57.343054 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-h76bl_openshift-multus(e5c6e29f-6131-4daa-b297-81eb53e7384c)\"" pod="openshift-multus/multus-h76bl" podUID="e5c6e29f-6131-4daa-b297-81eb53e7384c" Dec 09 11:28:57 crc kubenswrapper[4849]: I1209 11:28:57.536148 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:57 crc kubenswrapper[4849]: E1209 11:28:57.536296 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:28:58 crc kubenswrapper[4849]: I1209 11:28:58.347256 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/1.log" Dec 09 11:28:58 crc kubenswrapper[4849]: E1209 11:28:58.481630 4849 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 09 11:28:58 crc kubenswrapper[4849]: I1209 11:28:58.536165 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:28:58 crc kubenswrapper[4849]: I1209 11:28:58.540676 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:28:58 crc kubenswrapper[4849]: E1209 11:28:58.544713 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:28:58 crc kubenswrapper[4849]: I1209 11:28:58.544885 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:28:58 crc kubenswrapper[4849]: E1209 11:28:58.545308 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:28:58 crc kubenswrapper[4849]: E1209 11:28:58.545601 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:28:58 crc kubenswrapper[4849]: E1209 11:28:58.687112 4849 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 11:28:59 crc kubenswrapper[4849]: I1209 11:28:59.536507 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:28:59 crc kubenswrapper[4849]: E1209 11:28:59.536712 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:29:00 crc kubenswrapper[4849]: I1209 11:29:00.535789 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:00 crc kubenswrapper[4849]: I1209 11:29:00.535870 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:00 crc kubenswrapper[4849]: E1209 11:29:00.535968 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:29:00 crc kubenswrapper[4849]: I1209 11:29:00.535823 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:00 crc kubenswrapper[4849]: E1209 11:29:00.536121 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:29:00 crc kubenswrapper[4849]: E1209 11:29:00.536197 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:29:01 crc kubenswrapper[4849]: I1209 11:29:01.535711 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:01 crc kubenswrapper[4849]: E1209 11:29:01.536610 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:29:02 crc kubenswrapper[4849]: I1209 11:29:02.535599 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:02 crc kubenswrapper[4849]: I1209 11:29:02.535657 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:02 crc kubenswrapper[4849]: E1209 11:29:02.535757 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:29:02 crc kubenswrapper[4849]: E1209 11:29:02.535908 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:29:02 crc kubenswrapper[4849]: I1209 11:29:02.535959 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:02 crc kubenswrapper[4849]: E1209 11:29:02.536034 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:29:03 crc kubenswrapper[4849]: I1209 11:29:03.536068 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:03 crc kubenswrapper[4849]: E1209 11:29:03.536203 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:29:03 crc kubenswrapper[4849]: E1209 11:29:03.688853 4849 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 11:29:04 crc kubenswrapper[4849]: I1209 11:29:04.535828 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:04 crc kubenswrapper[4849]: E1209 11:29:04.535973 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:29:04 crc kubenswrapper[4849]: I1209 11:29:04.535852 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:04 crc kubenswrapper[4849]: E1209 11:29:04.536200 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:29:04 crc kubenswrapper[4849]: I1209 11:29:04.536743 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:04 crc kubenswrapper[4849]: E1209 11:29:04.536995 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:29:05 crc kubenswrapper[4849]: I1209 11:29:05.535900 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:05 crc kubenswrapper[4849]: E1209 11:29:05.536022 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:29:06 crc kubenswrapper[4849]: I1209 11:29:06.535462 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:06 crc kubenswrapper[4849]: I1209 11:29:06.535503 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:06 crc kubenswrapper[4849]: I1209 11:29:06.535537 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:06 crc kubenswrapper[4849]: E1209 11:29:06.535638 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:29:06 crc kubenswrapper[4849]: E1209 11:29:06.535866 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:29:06 crc kubenswrapper[4849]: E1209 11:29:06.536019 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:29:07 crc kubenswrapper[4849]: I1209 11:29:07.535435 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:07 crc kubenswrapper[4849]: E1209 11:29:07.536204 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:29:07 crc kubenswrapper[4849]: I1209 11:29:07.536373 4849 scope.go:117] "RemoveContainer" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:29:08 crc kubenswrapper[4849]: I1209 11:29:08.380788 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/3.log" Dec 09 11:29:08 crc kubenswrapper[4849]: I1209 11:29:08.384884 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerStarted","Data":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} Dec 09 11:29:08 crc kubenswrapper[4849]: I1209 11:29:08.385289 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:29:08 crc kubenswrapper[4849]: I1209 11:29:08.422773 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podStartSLOduration=109.422758015 podStartE2EDuration="1m49.422758015s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:08.420207495 +0000 UTC m=+130.960091831" watchObservedRunningTime="2025-12-09 11:29:08.422758015 +0000 UTC m=+130.962642331" Dec 09 11:29:08 crc kubenswrapper[4849]: I1209 11:29:08.535511 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:08 crc kubenswrapper[4849]: I1209 11:29:08.535573 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:08 crc kubenswrapper[4849]: I1209 11:29:08.535578 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:08 crc kubenswrapper[4849]: E1209 11:29:08.536620 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:29:08 crc kubenswrapper[4849]: E1209 11:29:08.536684 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:29:08 crc kubenswrapper[4849]: E1209 11:29:08.536732 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:29:08 crc kubenswrapper[4849]: E1209 11:29:08.689526 4849 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 11:29:08 crc kubenswrapper[4849]: I1209 11:29:08.831231 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qcffq"] Dec 09 11:29:08 crc kubenswrapper[4849]: I1209 11:29:08.831344 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:08 crc kubenswrapper[4849]: E1209 11:29:08.831439 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:29:10 crc kubenswrapper[4849]: I1209 11:29:10.535683 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:10 crc kubenswrapper[4849]: I1209 11:29:10.535719 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:10 crc kubenswrapper[4849]: I1209 11:29:10.535719 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:10 crc kubenswrapper[4849]: I1209 11:29:10.535749 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:10 crc kubenswrapper[4849]: E1209 11:29:10.536349 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:29:10 crc kubenswrapper[4849]: E1209 11:29:10.536227 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:29:10 crc kubenswrapper[4849]: E1209 11:29:10.536097 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:29:10 crc kubenswrapper[4849]: E1209 11:29:10.536491 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:29:10 crc kubenswrapper[4849]: I1209 11:29:10.537083 4849 scope.go:117] "RemoveContainer" containerID="954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b" Dec 09 11:29:11 crc kubenswrapper[4849]: I1209 11:29:11.397689 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/1.log" Dec 09 11:29:11 crc kubenswrapper[4849]: I1209 11:29:11.397747 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h76bl" event={"ID":"e5c6e29f-6131-4daa-b297-81eb53e7384c","Type":"ContainerStarted","Data":"ebf4aaa40d1d01e3c26b272ee565c54370454d5bf20e9cf2c3c36076426c1c4d"} Dec 09 11:29:12 crc kubenswrapper[4849]: I1209 11:29:12.535532 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:12 crc kubenswrapper[4849]: I1209 11:29:12.535557 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:12 crc kubenswrapper[4849]: E1209 11:29:12.536023 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qcffq" podUID="fa5f421b-d486-4b0d-a615-7887df025c00" Dec 09 11:29:12 crc kubenswrapper[4849]: I1209 11:29:12.535676 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:12 crc kubenswrapper[4849]: E1209 11:29:12.536144 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 11:29:12 crc kubenswrapper[4849]: I1209 11:29:12.535610 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:12 crc kubenswrapper[4849]: E1209 11:29:12.536251 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 11:29:12 crc kubenswrapper[4849]: E1209 11:29:12.536304 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 11:29:12 crc kubenswrapper[4849]: I1209 11:29:12.604571 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.535729 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.535765 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.535904 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.536761 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.538327 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.538496 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.538949 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.539000 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.539121 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 11:29:14 crc kubenswrapper[4849]: I1209 11:29:14.539738 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.269542 4849 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.323545 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25rtx"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.324188 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.325152 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jlw2t"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.325888 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.326667 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.327298 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.336038 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346539 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-dir\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346571 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adfc03a3-e122-4ebf-b69c-6fdc39087856-serving-cert\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346593 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/adfc03a3-e122-4ebf-b69c-6fdc39087856-node-pullsecrets\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346616 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346618 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346636 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/158b2582-edcf-45dc-908a-28112166eab0-machine-approver-tls\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346781 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/adfc03a3-e122-4ebf-b69c-6fdc39087856-etcd-client\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346848 4849 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/158b2582-edcf-45dc-908a-28112166eab0-auth-proxy-config\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346892 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346930 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.346987 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347021 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347078 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlqhk\" (UniqueName: \"kubernetes.io/projected/158b2582-edcf-45dc-908a-28112166eab0-kube-api-access-dlqhk\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347108 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-config\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347147 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347144 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347329 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk78\" (UniqueName: \"kubernetes.io/projected/ff9d1831-83f7-46b5-a110-4ef163ec3516-kube-api-access-2fk78\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347438 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347532 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/adfc03a3-e122-4ebf-b69c-6fdc39087856-encryption-config\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347625 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-image-import-ca\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347725 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158b2582-edcf-45dc-908a-28112166eab0-config\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347811 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347894 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-etcd-serving-ca\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.347999 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.348082 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.348167 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.348258 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.348346 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-policies\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.348462 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8hr\" (UniqueName: \"kubernetes.io/projected/adfc03a3-e122-4ebf-b69c-6fdc39087856-kube-api-access-wn8hr\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.348539 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-audit\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.348625 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/adfc03a3-e122-4ebf-b69c-6fdc39087856-audit-dir\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.350280 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zqkl8"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.351902 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.411747 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412246 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412248 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412382 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412183 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412105 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412704 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412777 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412853 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412941 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.412982 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.423419 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.423486 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.428752 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.444483 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.448198 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.448324 4849 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"audit-1" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.448461 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.448503 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.448512 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.448632 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.448718 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449043 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449083 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449107 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449136 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-client-ca\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449159 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449162 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449569 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8hr\" (UniqueName: 
\"kubernetes.io/projected/adfc03a3-e122-4ebf-b69c-6fdc39087856-kube-api-access-wn8hr\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449608 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-policies\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449633 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d41acaad-c321-4016-8330-f6de9b6e9326-config\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449661 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-audit\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449688 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/adfc03a3-e122-4ebf-b69c-6fdc39087856-audit-dir\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449721 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-dir\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449745 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adfc03a3-e122-4ebf-b69c-6fdc39087856-serving-cert\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449788 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449816 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/adfc03a3-e122-4ebf-b69c-6fdc39087856-node-pullsecrets\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449840 4849 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jgfr\" (UniqueName: \"kubernetes.io/projected/d41acaad-c321-4016-8330-f6de9b6e9326-kube-api-access-7jgfr\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449868 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/158b2582-edcf-45dc-908a-28112166eab0-machine-approver-tls\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449887 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-serving-cert\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449929 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/158b2582-edcf-45dc-908a-28112166eab0-auth-proxy-config\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.449452 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.450007 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/adfc03a3-e122-4ebf-b69c-6fdc39087856-etcd-client\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.450084 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/adfc03a3-e122-4ebf-b69c-6fdc39087856-audit-dir\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.450160 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-dir\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.450193 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.450317 4849 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-config\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.450370 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.450393 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.450943 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.450996 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-policies\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.451526 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.451804 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.452113 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.452384 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.452710 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-l6kz7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.459272 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8wwr8"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.459659 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/158b2582-edcf-45dc-908a-28112166eab0-machine-approver-tls\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.457021 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-audit\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.457101 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adfc03a3-e122-4ebf-b69c-6fdc39087856-serving-cert\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.457295 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.459848 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.459899 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlqhk\" (UniqueName: \"kubernetes.io/projected/158b2582-edcf-45dc-908a-28112166eab0-kube-api-access-dlqhk\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.459960 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-config\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460021 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.457808 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460033 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460194 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fk78\" (UniqueName: \"kubernetes.io/projected/ff9d1831-83f7-46b5-a110-4ef163ec3516-kube-api-access-2fk78\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460226 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460277 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/adfc03a3-e122-4ebf-b69c-6fdc39087856-encryption-config\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460315 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d41acaad-c321-4016-8330-f6de9b6e9326-images\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460346 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-image-import-ca\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460374 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158b2582-edcf-45dc-908a-28112166eab0-config\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460397 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460443 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41acaad-c321-4016-8330-f6de9b6e9326-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460470 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lcz5\" (UniqueName: \"kubernetes.io/projected/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-kube-api-access-4lcz5\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.460494 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-etcd-serving-ca\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.456305 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/adfc03a3-e122-4ebf-b69c-6fdc39087856-node-pullsecrets\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.452783 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.461081 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-etcd-serving-ca\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.461663 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.453063 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.459897 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gr4ld"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.453727 4849 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.453793 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.454250 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.462604 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.457607 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.456260 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.466036 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.468968 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-image-import-ca\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.467604 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158b2582-edcf-45dc-908a-28112166eab0-config\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.457885 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.469161 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.457922 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.467091 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.466873 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.469839 4849 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.469849 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.469953 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/158b2582-edcf-45dc-908a-28112166eab0-auth-proxy-config\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.470049 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.470165 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.463090 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-config\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.470915 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhhqf"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.471302 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.471689 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.471977 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.472031 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.472124 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.472468 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.472674 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.480870 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.482795 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25rtx"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.482859 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nshxd"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.483402 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-njgnq"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.483948 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.486714 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.497694 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/adfc03a3-e122-4ebf-b69c-6fdc39087856-etcd-client\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.497757 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.499386 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.500346 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.500865 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.501458 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.501631 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-74c2r"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.502170 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-74c2r" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.503070 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l6kz7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.504696 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.505130 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.505529 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.505704 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.505853 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.505873 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.506090 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.506105 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.506322 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.506371 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.507399 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.507684 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.507745 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.507833 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.508067 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.508115 4849 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.508329 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.510706 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.511257 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/adfc03a3-e122-4ebf-b69c-6fdc39087856-encryption-config\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.515959 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jlw2t"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.519152 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.519554 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.519631 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.519571 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.519796 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.520063 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.520984 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8hr\" (UniqueName: \"kubernetes.io/projected/adfc03a3-e122-4ebf-b69c-6fdc39087856-kube-api-access-wn8hr\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.524774 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.525021 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.525238 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.531287 4849 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.531773 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.531907 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.532002 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.532139 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.532796 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.551754 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.552240 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlqhk\" (UniqueName: \"kubernetes.io/projected/158b2582-edcf-45dc-908a-28112166eab0-kube-api-access-dlqhk\") pod \"machine-approver-56656f9798-wzkn8\" (UID: \"158b2582-edcf-45dc-908a-28112166eab0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.565620 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-config\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.565902 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d41acaad-c321-4016-8330-f6de9b6e9326-images\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.566031 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41acaad-c321-4016-8330-f6de9b6e9326-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.566187 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lcz5\" (UniqueName: \"kubernetes.io/projected/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-kube-api-access-4lcz5\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.566353 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-client-ca\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.566453 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d41acaad-c321-4016-8330-f6de9b6e9326-config\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.566583 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jgfr\" (UniqueName: \"kubernetes.io/projected/d41acaad-c321-4016-8330-f6de9b6e9326-kube-api-access-7jgfr\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.566701 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-serving-cert\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.576429 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-client-ca\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.594499 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d41acaad-c321-4016-8330-f6de9b6e9326-images\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.595548 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d41acaad-c321-4016-8330-f6de9b6e9326-config\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.600101 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-config\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.600784 4849 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.602124 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-serving-cert\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.604234 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.605208 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.605229 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.605654 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zqkl8"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.605675 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.604286 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.606032 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.606504 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.604483 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.607035 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.605783 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.607609 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.607736 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.607875 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.608007 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.608358 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.608537 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.609506 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.610389 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.610502 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.610558 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.610586 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.610653 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.610708 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.610812 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.611271 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.611389 4849 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.611694 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.611855 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.612030 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.612132 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.612343 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.612563 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.612670 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhhqf"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.612816 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.617120 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41acaad-c321-4016-8330-f6de9b6e9326-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.618021 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.618164 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.618505 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.619837 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adfc03a3-e122-4ebf-b69c-6fdc39087856-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jlw2t\" (UID: \"adfc03a3-e122-4ebf-b69c-6fdc39087856\") " pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.623033 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.623480 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.623549 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.624012 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.624404 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.625096 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.626155 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.627344 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-665jx"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.631656 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.633028 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8wwr8"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.633133 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.635494 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.637160 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.638364 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.642192 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.642548 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.642813 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.653421 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.654847 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 
11:29:20.655160 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.657274 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.659249 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fk78\" (UniqueName: \"kubernetes.io/projected/ff9d1831-83f7-46b5-a110-4ef163ec3516-kube-api-access-2fk78\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.659275 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.660543 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.661624 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.662835 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.663016 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.666075 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.668487 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-25rtx\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.692036 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.694532 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.695226 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xksm"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.695618 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.695692 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.695974 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.697117 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.698456 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.701308 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.705291 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.706871 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.709133 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.710250 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fvjw7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.710539 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.712192 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.714152 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-flzss"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.715501 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.715973 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.716606 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.716780 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9zsgd"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.717136 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.717386 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.718371 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.719010 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.719357 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qmrg5"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.721009 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.722486 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gr4ld"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.725281 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.728287 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.730962 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-njgnq"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.735096 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nshxd"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.737831 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-74c2r"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.739907 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kj9rl"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.740596 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.745292 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.747179 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.750142 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.751579 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.755912 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fvjw7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.757363 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xksm"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.757487 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.758974 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.761919 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.768161 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.769314 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.772052 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.777826 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.777901 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-h72pd"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.789443 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7"] Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.789565 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-h72pd"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.790844 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.800691 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.805326 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.805353 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.805364 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.812606 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.812668 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-665jx"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.818011 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.823606 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9zsgd"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.824704 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-h72pd"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.839726 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.840177 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qmrg5"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.842791 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-n726k"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.843499 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n726k"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.843585 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n726k"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.872025 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.872876 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lcz5\" (UniqueName: \"kubernetes.io/projected/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-kube-api-access-4lcz5\") pod \"route-controller-manager-6576b87f9c-q5fhv\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.880309 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jgfr\" (UniqueName: \"kubernetes.io/projected/d41acaad-c321-4016-8330-f6de9b6e9326-kube-api-access-7jgfr\") pod \"machine-api-operator-5694c8668f-zqkl8\" (UID: \"d41acaad-c321-4016-8330-f6de9b6e9326\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.886672 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.907146 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.928355 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.941719 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.946376 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.965389 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jlw2t"]
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.966959 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.969556 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.975879 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8"
Dec 09 11:29:20 crc kubenswrapper[4849]: I1209 11:29:20.986699 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 09 11:29:21 crc kubenswrapper[4849]: W1209 11:29:21.005127 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadfc03a3_e122_4ebf_b69c_6fdc39087856.slice/crio-005b10dfb129afa3b1b92bd34f467027773c21c637862879c0e108ae46b73eb4 WatchSource:0}: Error finding container 005b10dfb129afa3b1b92bd34f467027773c21c637862879c0e108ae46b73eb4: Status 404 returned error can't find the container with id 005b10dfb129afa3b1b92bd34f467027773c21c637862879c0e108ae46b73eb4
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.006821 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.046661 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.072143 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.086152 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.106973 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.126573 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.132258 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.132311 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.149691 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.153729 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25rtx"]
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.166239 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.186806 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv"] Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.190672 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.206910 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.228365 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.249374 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.267676 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.281887 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zqkl8"] Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.287203 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.306247 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.326986 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.347716 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.367280 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.386297 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.406973 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.430842 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" event={"ID":"adfc03a3-e122-4ebf-b69c-6fdc39087856","Type":"ContainerStarted","Data":"005b10dfb129afa3b1b92bd34f467027773c21c637862879c0e108ae46b73eb4"} Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.431779 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" event={"ID":"158b2582-edcf-45dc-908a-28112166eab0","Type":"ContainerStarted","Data":"959f6b51bd95fe1e091510ca535d7322d0245dc80de24f5b6df4e04cba57c379"} Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.432705 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" 
event={"ID":"c7d83c17-96de-4f5b-b3c0-199d7fa21fab","Type":"ContainerStarted","Data":"d191806d913e3b03bec4f30e057baa1f0f392bb580d3ebce5764dfe4c842b0ac"} Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.433528 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" event={"ID":"ff9d1831-83f7-46b5-a110-4ef163ec3516","Type":"ContainerStarted","Data":"941f27af35a0c953f35db39fdd3915a7b6a8a0df8497752b5047f04427004124"} Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.434349 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.447834 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.466592 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.487347 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.506622 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.527101 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.546690 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.566965 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.586518 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.606846 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.626643 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.647587 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.667511 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.687054 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.706986 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.724760 4849 request.go:700] Waited for 1.006855314s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.726265 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.746285 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.766572 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.785733 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.807105 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.827340 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.846802 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.867288 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.887730 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.907278 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.926797 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.947257 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.966739 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 11:29:21 crc kubenswrapper[4849]: I1209 11:29:21.986735 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.014504 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.026749 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.046661 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.066258 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 
11:29:22.086888 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.107274 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.127484 4849 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.146384 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.166886 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.187160 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.206879 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.226612 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.246770 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.267665 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.286145 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.306920 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.327041 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.347074 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.366447 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.385761 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.426947 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.438772 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" event={"ID":"d41acaad-c321-4016-8330-f6de9b6e9326","Type":"ContainerStarted","Data":"11cb4f0c35b7beb383fe2829a0d7d001084cd0d064bee463e887fdb093cd0733"} Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.440470 4849 generic.go:334] "Generic (PLEG): container finished" 
podID="adfc03a3-e122-4ebf-b69c-6fdc39087856" containerID="a6392151382ef8cd4e76215ad8a65583841359a25f04b508ae8ec67a02d7c1e3" exitCode=0 Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.440537 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" event={"ID":"adfc03a3-e122-4ebf-b69c-6fdc39087856","Type":"ContainerDied","Data":"a6392151382ef8cd4e76215ad8a65583841359a25f04b508ae8ec67a02d7c1e3"} Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.442003 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" event={"ID":"158b2582-edcf-45dc-908a-28112166eab0","Type":"ContainerStarted","Data":"234ff8d05495a41568625ca8883cff0ce22bc16f4ce2b4cab69c637766d408f9"} Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.447350 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.461195 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-registry-certificates\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.461248 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lgdg\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-kube-api-access-9lgdg\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.461273 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r75f\" (UniqueName: \"kubernetes.io/projected/00d38b26-84b0-4b2e-92d1-d6c0f63f729d-kube-api-access-5r75f\") pod \"kube-storage-version-migrator-operator-b67b599dd-xk4zj\" (UID: \"00d38b26-84b0-4b2e-92d1-d6c0f63f729d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.461296 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca549b95-b862-43e6-8540-595d05555d3c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.461662 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d38b26-84b0-4b2e-92d1-d6c0f63f729d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xk4zj\" (UID: \"00d38b26-84b0-4b2e-92d1-d6c0f63f729d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.462007 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-registry-tls\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.462049 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-trusted-ca\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.462159 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.462249 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ca549b95-b862-43e6-8540-595d05555d3c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.462277 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d38b26-84b0-4b2e-92d1-d6c0f63f729d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xk4zj\" (UID: \"00d38b26-84b0-4b2e-92d1-d6c0f63f729d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: E1209 11:29:22.462545 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:22.962528521 +0000 UTC m=+145.502412927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.462577 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-bound-sa-token\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.466786 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.486820 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.562967 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:22 crc kubenswrapper[4849]: E1209 11:29:22.563119 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.063092159 +0000 UTC m=+145.602976475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563394 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpf62\" (UniqueName: \"kubernetes.io/projected/a9eab53b-c723-460f-b55e-88b441b25a76-kube-api-access-qpf62\") pod \"cluster-samples-operator-665b6dd947-z4fst\" (UID: \"a9eab53b-c723-460f-b55e-88b441b25a76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563458 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e7f6648a-8487-415a-bdd3-16a27fea4871-metrics-tls\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563497 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db29ce09-9dfc-44aa-9eec-3a431d33e0e6-serving-cert\") pod \"openshift-config-operator-7777fb866f-zjfc7\" (UID: \"db29ce09-9dfc-44aa-9eec-3a431d33e0e6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563520 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22s45\" (UniqueName: \"kubernetes.io/projected/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-kube-api-access-22s45\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563541 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7f6648a-8487-415a-bdd3-16a27fea4871-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563590 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lgdg\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-kube-api-access-9lgdg\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563611 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7f6648a-8487-415a-bdd3-16a27fea4871-trusted-ca\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc 
kubenswrapper[4849]: I1209 11:29:22.563637 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8b185e5-e51b-4945-baca-221a382c0714-serving-cert\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563659 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96bbdc9d-911c-4916-a775-6ad2f827f831-service-ca-bundle\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563692 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3cfa026c-5ae3-47cc-aee8-b06522339617-metrics-tls\") pod \"dns-default-h72pd\" (UID: \"3cfa026c-5ae3-47cc-aee8-b06522339617\") " pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563715 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r75f\" (UniqueName: \"kubernetes.io/projected/00d38b26-84b0-4b2e-92d1-d6c0f63f729d-kube-api-access-5r75f\") pod \"kube-storage-version-migrator-operator-b67b599dd-xk4zj\" (UID: \"00d38b26-84b0-4b2e-92d1-d6c0f63f729d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563742 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca549b95-b862-43e6-8540-595d05555d3c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563763 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-config\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563788 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96bbdc9d-911c-4916-a775-6ad2f827f831-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563810 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cadfb13c-1ae0-4e22-b3df-fc477f51d4dd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dxvqz\" (UID: \"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:22 crc kubenswrapper[4849]: 
I1209 11:29:22.563832 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vcz\" (UniqueName: \"kubernetes.io/projected/cadfb13c-1ae0-4e22-b3df-fc477f51d4dd-kube-api-access-78vcz\") pod \"openshift-controller-manager-operator-756b6f6bc6-dxvqz\" (UID: \"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563864 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d38b26-84b0-4b2e-92d1-d6c0f63f729d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xk4zj\" (UID: \"00d38b26-84b0-4b2e-92d1-d6c0f63f729d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563897 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-trusted-ca-bundle\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563921 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgrqm\" (UniqueName: \"kubernetes.io/projected/f1b338b8-701c-4c0e-87ba-f830190df7eb-kube-api-access-pgrqm\") pod \"ingress-canary-n726k\" (UID: \"f1b338b8-701c-4c0e-87ba-f830190df7eb\") " pod="openshift-ingress-canary/ingress-canary-n726k" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563945 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8b185e5-e51b-4945-baca-221a382c0714-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563968 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg8dz\" (UniqueName: \"kubernetes.io/projected/ecade532-431e-464e-af8f-bdb1fe23ec47-kube-api-access-pg8dz\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.563990 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e8b185e5-e51b-4945-baca-221a382c0714-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564014 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwjcj\" (UniqueName: \"kubernetes.io/projected/8e9eff9a-660a-450b-9c63-c473634e7d0a-kube-api-access-pwjcj\") pod \"collect-profiles-29421315-g667j\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:22 
crc kubenswrapper[4849]: I1209 11:29:22.564040 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brhnq\" (UniqueName: \"kubernetes.io/projected/3cfa026c-5ae3-47cc-aee8-b06522339617-kube-api-access-brhnq\") pod \"dns-default-h72pd\" (UID: \"3cfa026c-5ae3-47cc-aee8-b06522339617\") " pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564063 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-registry-tls\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564088 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9eff9a-660a-450b-9c63-c473634e7d0a-config-volume\") pod \"collect-profiles-29421315-g667j\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564112 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-trusted-ca\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564140 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564164 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ca549b95-b862-43e6-8540-595d05555d3c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564188 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f1b9f2-8afc-4817-a97b-4788f99674fe-serving-cert\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564209 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-oauth-serving-cert\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564230 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/96bbdc9d-911c-4916-a775-6ad2f827f831-serving-cert\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564251 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9eff9a-660a-450b-9c63-c473634e7d0a-secret-volume\") pod \"collect-profiles-29421315-g667j\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564273 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d38b26-84b0-4b2e-92d1-d6c0f63f729d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xk4zj\" (UID: \"00d38b26-84b0-4b2e-92d1-d6c0f63f729d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564296 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/db29ce09-9dfc-44aa-9eec-3a431d33e0e6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zjfc7\" (UID: \"db29ce09-9dfc-44aa-9eec-3a431d33e0e6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564319 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4f1b9f2-8afc-4817-a97b-4788f99674fe-etcd-client\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564339 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh86z\" (UniqueName: \"kubernetes.io/projected/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-kube-api-access-lh86z\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564360 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bbdc9d-911c-4916-a775-6ad2f827f831-config\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564499 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-bound-sa-token\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564570 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ecade532-431e-464e-af8f-bdb1fe23ec47-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564597 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhlf9\" (UniqueName: \"kubernetes.io/projected/db29ce09-9dfc-44aa-9eec-3a431d33e0e6-kube-api-access-dhlf9\") pod \"openshift-config-operator-7777fb866f-zjfc7\" (UID: \"db29ce09-9dfc-44aa-9eec-3a431d33e0e6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564619 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9p2z\" (UniqueName: \"kubernetes.io/projected/b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f-kube-api-access-m9p2z\") pod \"machine-config-controller-84d6567774-pwnk8\" (UID: \"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564675 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfa026c-5ae3-47cc-aee8-b06522339617-config-volume\") pod \"dns-default-h72pd\" (UID: \"3cfa026c-5ae3-47cc-aee8-b06522339617\") " pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564696 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-oauth-config\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564746 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-config\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564771 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecade532-431e-464e-af8f-bdb1fe23ec47-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564822 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a4f1b9f2-8afc-4817-a97b-4788f99674fe-etcd-ca\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564845 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e8b185e5-e51b-4945-baca-221a382c0714-audit-dir\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564868 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4md5m\" (UniqueName: \"kubernetes.io/projected/8948a613-56f3-4a89-adb7-2c4a2262f2ee-kube-api-access-4md5m\") pod \"downloads-7954f5f757-74c2r\" (UID: \"8948a613-56f3-4a89-adb7-2c4a2262f2ee\") " pod="openshift-console/downloads-7954f5f757-74c2r" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564920 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-service-ca\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.564942 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rll\" (UniqueName: \"kubernetes.io/projected/96bbdc9d-911c-4916-a775-6ad2f827f831-kube-api-access-n7rll\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.565080 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn8mh\" (UniqueName: \"kubernetes.io/projected/e8b185e5-e51b-4945-baca-221a382c0714-kube-api-access-pn8mh\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.565104 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fmcd\" (UniqueName: \"kubernetes.io/projected/a477726d-aae1-47d9-8a3a-70316f991c29-kube-api-access-6fmcd\") pod \"dns-operator-744455d44c-njgnq\" (UID: \"a477726d-aae1-47d9-8a3a-70316f991c29\") " pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.565238 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e8b185e5-e51b-4945-baca-221a382c0714-encryption-config\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.569918 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a4f1b9f2-8afc-4817-a97b-4788f99674fe-etcd-service-ca\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.569981 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dcf\" (UniqueName: \"kubernetes.io/projected/a4f1b9f2-8afc-4817-a97b-4788f99674fe-kube-api-access-z4dcf\") pod 
\"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.570017 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f-proxy-tls\") pod \"machine-config-controller-84d6567774-pwnk8\" (UID: \"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.570082 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f1b9f2-8afc-4817-a97b-4788f99674fe-config\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: E1209 11:29:22.570488 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.070468376 +0000 UTC m=+145.610352702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.572669 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-trusted-ca\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.579195 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d38b26-84b0-4b2e-92d1-d6c0f63f729d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xk4zj\" (UID: \"00d38b26-84b0-4b2e-92d1-d6c0f63f729d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.579322 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d38b26-84b0-4b2e-92d1-d6c0f63f729d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xk4zj\" (UID: \"00d38b26-84b0-4b2e-92d1-d6c0f63f729d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.587599 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-registry-tls\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc 
kubenswrapper[4849]: I1209 11:29:22.591662 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca549b95-b862-43e6-8540-595d05555d3c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.596837 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ca549b95-b862-43e6-8540-595d05555d3c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.596916 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-client-ca\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.596964 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gxc\" (UniqueName: \"kubernetes.io/projected/e7f6648a-8487-415a-bdd3-16a27fea4871-kube-api-access-n4gxc\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.596987 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8b185e5-e51b-4945-baca-221a382c0714-audit-policies\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597010 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-serving-cert\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597079 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597101 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1b338b8-701c-4c0e-87ba-f830190df7eb-cert\") pod \"ingress-canary-n726k\" (UID: \"f1b338b8-701c-4c0e-87ba-f830190df7eb\") " pod="openshift-ingress-canary/ingress-canary-n726k" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597127 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-serving-cert\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597149 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pwnk8\" (UID: \"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597190 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e8b185e5-e51b-4945-baca-221a382c0714-etcd-client\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597214 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadfb13c-1ae0-4e22-b3df-fc477f51d4dd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dxvqz\" (UID: \"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597241 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ecade532-431e-464e-af8f-bdb1fe23ec47-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597264 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a477726d-aae1-47d9-8a3a-70316f991c29-metrics-tls\") pod \"dns-operator-744455d44c-njgnq\" (UID: \"a477726d-aae1-47d9-8a3a-70316f991c29\") " pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597287 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9eab53b-c723-460f-b55e-88b441b25a76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z4fst\" (UID: \"a9eab53b-c723-460f-b55e-88b441b25a76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.597329 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-registry-certificates\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.598446 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-registry-certificates\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.618325 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lgdg\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-kube-api-access-9lgdg\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.624571 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-bound-sa-token\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.643293 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r75f\" (UniqueName: \"kubernetes.io/projected/00d38b26-84b0-4b2e-92d1-d6c0f63f729d-kube-api-access-5r75f\") pod \"kube-storage-version-migrator-operator-b67b599dd-xk4zj\" (UID: \"00d38b26-84b0-4b2e-92d1-d6c0f63f729d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.700041 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.700216 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f1b9f2-8afc-4817-a97b-4788f99674fe-serving-cert\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.700237 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh86z\" (UniqueName: \"kubernetes.io/projected/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-kube-api-access-lh86z\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: E1209 11:29:22.700266 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.200235568 +0000 UTC m=+145.740119924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.700311 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bbdc9d-911c-4916-a775-6ad2f827f831-config\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.700596 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96bbdc9d-911c-4916-a775-6ad2f827f831-serving-cert\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.700689 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f27dee9-7157-455c-84d5-24c51b874b53-apiservice-cert\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.701766 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/29dd11e8-40b1-485b-a1c7-4df44220d7b0-srv-cert\") pod \"olm-operator-6b444d44fb-ddv82\" (UID: \"29dd11e8-40b1-485b-a1c7-4df44220d7b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.701880 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-images\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.701929 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxtms\" (UniqueName: \"kubernetes.io/projected/2314f111-b042-40c3-832c-1c0d49c5e088-kube-api-access-rxtms\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.702031 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/db29ce09-9dfc-44aa-9eec-3a431d33e0e6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zjfc7\" (UID: \"db29ce09-9dfc-44aa-9eec-3a431d33e0e6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 
11:29:22.702058 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtfzb\" (UniqueName: \"kubernetes.io/projected/4a0fccbe-ade0-4666-8758-d67f3c74e8e7-kube-api-access-xtfzb\") pod \"control-plane-machine-set-operator-78cbb6b69f-dvfn7\" (UID: \"4a0fccbe-ade0-4666-8758-d67f3c74e8e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.702118 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecade532-431e-464e-af8f-bdb1fe23ec47-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.702145 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhlf9\" (UniqueName: \"kubernetes.io/projected/db29ce09-9dfc-44aa-9eec-3a431d33e0e6-kube-api-access-dhlf9\") pod \"openshift-config-operator-7777fb866f-zjfc7\" (UID: \"db29ce09-9dfc-44aa-9eec-3a431d33e0e6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.702175 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bljt2\" (UniqueName: \"kubernetes.io/projected/2d553207-9e63-4091-955d-35b3a8625ddb-kube-api-access-bljt2\") pod \"multus-admission-controller-857f4d67dd-665jx\" (UID: \"2d553207-9e63-4091-955d-35b3a8625ddb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.702668 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/db29ce09-9dfc-44aa-9eec-3a431d33e0e6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zjfc7\" (UID: \"db29ce09-9dfc-44aa-9eec-3a431d33e0e6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.704329 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecade532-431e-464e-af8f-bdb1fe23ec47-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.706154 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bbdc9d-911c-4916-a775-6ad2f827f831-config\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.706173 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0b2db-189a-44e6-a904-49c45fca1a3e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8pb5v\" (UID: \"2ab0b2db-189a-44e6-a904-49c45fca1a3e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:22 crc 
kubenswrapper[4849]: I1209 11:29:22.706390 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f1b9f2-8afc-4817-a97b-4788f99674fe-serving-cert\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.706401 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecade532-431e-464e-af8f-bdb1fe23ec47-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.708832 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4md5m\" (UniqueName: \"kubernetes.io/projected/8948a613-56f3-4a89-adb7-2c4a2262f2ee-kube-api-access-4md5m\") pod \"downloads-7954f5f757-74c2r\" (UID: \"8948a613-56f3-4a89-adb7-2c4a2262f2ee\") " pod="openshift-console/downloads-7954f5f757-74c2r" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.708874 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-oauth-config\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.708904 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-config\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.708938 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-proxy-tls\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.708966 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ab0b2db-189a-44e6-a904-49c45fca1a3e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8pb5v\" (UID: \"2ab0b2db-189a-44e6-a904-49c45fca1a3e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.708997 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-service-ca\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709027 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn8mh\" (UniqueName: 
\"kubernetes.io/projected/e8b185e5-e51b-4945-baca-221a382c0714-kube-api-access-pn8mh\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709058 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjkt\" (UniqueName: \"kubernetes.io/projected/c3d88dfe-fa31-4759-baa6-6c847eb53020-kube-api-access-8hjkt\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709086 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f-proxy-tls\") pod \"machine-config-controller-84d6567774-pwnk8\" (UID: \"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709116 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4e1464d3-a5a6-4fc8-a091-77f41d391939-signing-key\") pod \"service-ca-9c57cc56f-fvjw7\" (UID: \"4e1464d3-a5a6-4fc8-a091-77f41d391939\") " pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709165 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-client-ca\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709200 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbc3e4e-babb-4119-9343-68c87540802e-config\") pod \"kube-controller-manager-operator-78b949d7b-8g6j2\" (UID: \"dfbc3e4e-babb-4119-9343-68c87540802e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709230 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1b338b8-701c-4c0e-87ba-f830190df7eb-cert\") pod \"ingress-canary-n726k\" (UID: \"f1b338b8-701c-4c0e-87ba-f830190df7eb\") " pod="openshift-ingress-canary/ingress-canary-n726k" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709277 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pwnk8\" (UID: \"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709306 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4e1464d3-a5a6-4fc8-a091-77f41d391939-signing-cabundle\") pod \"service-ca-9c57cc56f-fvjw7\" (UID: 
\"4e1464d3-a5a6-4fc8-a091-77f41d391939\") " pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709332 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ecade532-431e-464e-af8f-bdb1fe23ec47-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709364 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22s45\" (UniqueName: \"kubernetes.io/projected/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-kube-api-access-22s45\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709399 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-csi-data-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709452 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7f6648a-8487-415a-bdd3-16a27fea4871-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709493 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7f6648a-8487-415a-bdd3-16a27fea4871-trusted-ca\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709517 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1f27dee9-7157-455c-84d5-24c51b874b53-tmpfs\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709541 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c3d88dfe-fa31-4759-baa6-6c847eb53020-stats-auth\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709565 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8b185e5-e51b-4945-baca-221a382c0714-serving-cert\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709613 4849 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96bbdc9d-911c-4916-a775-6ad2f827f831-service-ca-bundle\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709635 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3cfa026c-5ae3-47cc-aee8-b06522339617-metrics-tls\") pod \"dns-default-h72pd\" (UID: \"3cfa026c-5ae3-47cc-aee8-b06522339617\") " pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709657 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwqmp\" (UniqueName: \"kubernetes.io/projected/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-kube-api-access-gwqmp\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709686 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-config\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709705 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cadfb13c-1ae0-4e22-b3df-fc477f51d4dd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dxvqz\" (UID: \"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709728 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b4e4e9-c586-4aad-a2d0-220cc1bc9f43-config\") pod \"kube-apiserver-operator-766d6c64bb-4d9tw\" (UID: \"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709754 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96bbdc9d-911c-4916-a775-6ad2f827f831-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709774 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8b185e5-e51b-4945-baca-221a382c0714-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709792 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e8b185e5-e51b-4945-baca-221a382c0714-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k9zjm\" 
(UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.710395 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96bbdc9d-911c-4916-a775-6ad2f827f831-serving-cert\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.711263 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-service-ca\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.711911 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-config\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.709830 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18b4e4e9-c586-4aad-a2d0-220cc1bc9f43-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4d9tw\" (UID: \"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.712162 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/29dd11e8-40b1-485b-a1c7-4df44220d7b0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ddv82\" (UID: \"29dd11e8-40b1-485b-a1c7-4df44220d7b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.712264 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab0b2db-189a-44e6-a904-49c45fca1a3e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8pb5v\" (UID: \"2ab0b2db-189a-44e6-a904-49c45fca1a3e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.712363 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2314f111-b042-40c3-832c-1c0d49c5e088-serving-cert\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.712480 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-plugins-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.712595 4849 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3d88dfe-fa31-4759-baa6-6c847eb53020-service-ca-bundle\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.712706 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3d88dfe-fa31-4759-baa6-6c847eb53020-metrics-certs\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.712812 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72c5f55-1631-4f1a-8eb8-0c01edbdea67-serving-cert\") pod \"service-ca-operator-777779d784-5vdt8\" (UID: \"a72c5f55-1631-4f1a-8eb8-0c01edbdea67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.712884 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-oauth-config\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.713005 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1609f508-67f3-4209-b9f3-e2195456befe-node-bootstrap-token\") pod \"machine-config-server-kj9rl\" (UID: \"1609f508-67f3-4209-b9f3-e2195456befe\") " pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.713125 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a0fccbe-ade0-4666-8758-d67f3c74e8e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dvfn7\" (UID: \"4a0fccbe-ade0-4666-8758-d67f3c74e8e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.713237 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-oauth-serving-cert\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.713338 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkdm\" (UniqueName: \"kubernetes.io/projected/13016872-91b5-446f-a10a-93e366928c47-kube-api-access-6qkdm\") pod \"package-server-manager-789f6589d5-n8jqm\" (UID: \"13016872-91b5-446f-a10a-93e366928c47\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.713458 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4f1b9f2-8afc-4817-a97b-4788f99674fe-etcd-client\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.713558 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9eff9a-660a-450b-9c63-c473634e7d0a-secret-volume\") pod \"collect-profiles-29421315-g667j\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.713676 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbgc\" (UniqueName: \"kubernetes.io/projected/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-kube-api-access-kxbgc\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.713985 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79459db6-26fd-4be1-a0f6-ac4217a8229c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-grhwc\" (UID: \"79459db6-26fd-4be1-a0f6-ac4217a8229c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.714209 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9p2z\" (UniqueName: \"kubernetes.io/projected/b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f-kube-api-access-m9p2z\") pod \"machine-config-controller-84d6567774-pwnk8\" (UID: \"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.714351 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2314f111-b042-40c3-832c-1c0d49c5e088-config\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.715166 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96bbdc9d-911c-4916-a775-6ad2f827f831-service-ca-bundle\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.715351 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-client-ca\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.715488 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/4ef19983-4775-4438-83a8-f8279c96959c-profile-collector-cert\") pod \"catalog-operator-68c6474976-szrq9\" (UID: \"4ef19983-4775-4438-83a8-f8279c96959c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.715625 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfa026c-5ae3-47cc-aee8-b06522339617-config-volume\") pod \"dns-default-h72pd\" (UID: \"3cfa026c-5ae3-47cc-aee8-b06522339617\") " pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.715773 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8dkm\" (UniqueName: \"kubernetes.io/projected/79459db6-26fd-4be1-a0f6-ac4217a8229c-kube-api-access-c8dkm\") pod \"openshift-apiserver-operator-796bbdcf4f-grhwc\" (UID: \"79459db6-26fd-4be1-a0f6-ac4217a8229c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.715889 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8b185e5-e51b-4945-baca-221a382c0714-audit-dir\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.715987 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a4f1b9f2-8afc-4817-a97b-4788f99674fe-etcd-ca\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.716081 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7rll\" (UniqueName: \"kubernetes.io/projected/96bbdc9d-911c-4916-a775-6ad2f827f831-kube-api-access-n7rll\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.716167 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e8b185e5-e51b-4945-baca-221a382c0714-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.715783 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-config\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.715957 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8b185e5-e51b-4945-baca-221a382c0714-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc 
kubenswrapper[4849]: I1209 11:29:22.711975 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecade532-431e-464e-af8f-bdb1fe23ec47-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.716875 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfa026c-5ae3-47cc-aee8-b06522339617-config-volume\") pod \"dns-default-h72pd\" (UID: \"3cfa026c-5ae3-47cc-aee8-b06522339617\") " pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.717045 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-oauth-serving-cert\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.717682 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8b185e5-e51b-4945-baca-221a382c0714-audit-dir\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.717857 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a4f1b9f2-8afc-4817-a97b-4788f99674fe-etcd-service-ca\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.717900 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dcf\" (UniqueName: \"kubernetes.io/projected/a4f1b9f2-8afc-4817-a97b-4788f99674fe-kube-api-access-z4dcf\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.717969 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmcd\" (UniqueName: \"kubernetes.io/projected/a477726d-aae1-47d9-8a3a-70316f991c29-kube-api-access-6fmcd\") pod \"dns-operator-744455d44c-njgnq\" (UID: \"a477726d-aae1-47d9-8a3a-70316f991c29\") " pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.718001 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e8b185e5-e51b-4945-baca-221a382c0714-encryption-config\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721004 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f1b9f2-8afc-4817-a97b-4788f99674fe-config\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721058 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gxc\" (UniqueName: \"kubernetes.io/projected/e7f6648a-8487-415a-bdd3-16a27fea4871-kube-api-access-n4gxc\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721116 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8b185e5-e51b-4945-baca-221a382c0714-audit-policies\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721146 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-serving-cert\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721176 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdtj2\" (UniqueName: \"kubernetes.io/projected/1f27dee9-7157-455c-84d5-24c51b874b53-kube-api-access-tdtj2\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721208 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721242 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7xksm\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.719549 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a4f1b9f2-8afc-4817-a97b-4788f99674fe-etcd-service-ca\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721329 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-serving-cert\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721393 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721699 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f27dee9-7157-455c-84d5-24c51b874b53-webhook-cert\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721742 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e8b185e5-e51b-4945-baca-221a382c0714-etcd-client\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721772 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadfb13c-1ae0-4e22-b3df-fc477f51d4dd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dxvqz\" (UID: \"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721866 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79459db6-26fd-4be1-a0f6-ac4217a8229c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-grhwc\" (UID: \"79459db6-26fd-4be1-a0f6-ac4217a8229c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721909 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a477726d-aae1-47d9-8a3a-70316f991c29-metrics-tls\") pod \"dns-operator-744455d44c-njgnq\" (UID: \"a477726d-aae1-47d9-8a3a-70316f991c29\") " pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721957 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9tff\" (UniqueName: \"kubernetes.io/projected/4e1464d3-a5a6-4fc8-a091-77f41d391939-kube-api-access-t9tff\") pod \"service-ca-9c57cc56f-fvjw7\" (UID: \"4e1464d3-a5a6-4fc8-a091-77f41d391939\") " pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.721989 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9eab53b-c723-460f-b55e-88b441b25a76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z4fst\" (UID: \"a9eab53b-c723-460f-b55e-88b441b25a76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722022 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/db29ce09-9dfc-44aa-9eec-3a431d33e0e6-serving-cert\") pod \"openshift-config-operator-7777fb866f-zjfc7\" (UID: \"db29ce09-9dfc-44aa-9eec-3a431d33e0e6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722086 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpf62\" (UniqueName: \"kubernetes.io/projected/a9eab53b-c723-460f-b55e-88b441b25a76-kube-api-access-qpf62\") pod \"cluster-samples-operator-665b6dd947-z4fst\" (UID: \"a9eab53b-c723-460f-b55e-88b441b25a76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722114 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e7f6648a-8487-415a-bdd3-16a27fea4871-metrics-tls\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722142 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b2xv\" (UniqueName: \"kubernetes.io/projected/4ef19983-4775-4438-83a8-f8279c96959c-kube-api-access-2b2xv\") pod \"catalog-operator-68c6474976-szrq9\" (UID: \"4ef19983-4775-4438-83a8-f8279c96959c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722175 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7xksm\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722207 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfbc3e4e-babb-4119-9343-68c87540802e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8g6j2\" (UID: \"dfbc3e4e-babb-4119-9343-68c87540802e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722237 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfbc3e4e-babb-4119-9343-68c87540802e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8g6j2\" (UID: \"dfbc3e4e-babb-4119-9343-68c87540802e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722264 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/13016872-91b5-446f-a10a-93e366928c47-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n8jqm\" (UID: \"13016872-91b5-446f-a10a-93e366928c47\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722297 4849 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfdd2\" (UniqueName: \"kubernetes.io/projected/3e7c4a38-1f7c-4cb1-b757-8250869e1597-kube-api-access-zfdd2\") pod \"marketplace-operator-79b997595-7xksm\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722339 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78vcz\" (UniqueName: \"kubernetes.io/projected/cadfb13c-1ae0-4e22-b3df-fc477f51d4dd-kube-api-access-78vcz\") pod \"openshift-controller-manager-operator-756b6f6bc6-dxvqz\" (UID: \"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722371 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-mountpoint-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722399 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c3d88dfe-fa31-4759-baa6-6c847eb53020-default-certificate\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722445 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-trusted-ca-bundle\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722463 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f1b9f2-8afc-4817-a97b-4788f99674fe-config\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722475 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgrqm\" (UniqueName: \"kubernetes.io/projected/f1b338b8-701c-4c0e-87ba-f830190df7eb-kube-api-access-pgrqm\") pod \"ingress-canary-n726k\" (UID: \"f1b338b8-701c-4c0e-87ba-f830190df7eb\") " pod="openshift-ingress-canary/ingress-canary-n726k" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722542 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg8dz\" (UniqueName: \"kubernetes.io/projected/ecade532-431e-464e-af8f-bdb1fe23ec47-kube-api-access-pg8dz\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722574 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwjcj\" (UniqueName: 
\"kubernetes.io/projected/8e9eff9a-660a-450b-9c63-c473634e7d0a-kube-api-access-pwjcj\") pod \"collect-profiles-29421315-g667j\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722613 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2314f111-b042-40c3-832c-1c0d49c5e088-trusted-ca\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722643 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72c5f55-1631-4f1a-8eb8-0c01edbdea67-config\") pod \"service-ca-operator-777779d784-5vdt8\" (UID: \"a72c5f55-1631-4f1a-8eb8-0c01edbdea67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722667 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8zn\" (UniqueName: \"kubernetes.io/projected/1609f508-67f3-4209-b9f3-e2195456befe-kube-api-access-rw8zn\") pod \"machine-config-server-kj9rl\" (UID: \"1609f508-67f3-4209-b9f3-e2195456befe\") " pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722697 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4e4e9-c586-4aad-a2d0-220cc1bc9f43-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4d9tw\" (UID: \"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722726 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brhnq\" (UniqueName: \"kubernetes.io/projected/3cfa026c-5ae3-47cc-aee8-b06522339617-kube-api-access-brhnq\") pod \"dns-default-h72pd\" (UID: \"3cfa026c-5ae3-47cc-aee8-b06522339617\") " pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722757 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkrn\" (UniqueName: \"kubernetes.io/projected/29dd11e8-40b1-485b-a1c7-4df44220d7b0-kube-api-access-dwkrn\") pod \"olm-operator-6b444d44fb-ddv82\" (UID: \"29dd11e8-40b1-485b-a1c7-4df44220d7b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722780 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ef19983-4775-4438-83a8-f8279c96959c-srv-cert\") pod \"catalog-operator-68c6474976-szrq9\" (UID: \"4ef19983-4775-4438-83a8-f8279c96959c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722807 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9eff9a-660a-450b-9c63-c473634e7d0a-config-volume\") pod 
\"collect-profiles-29421315-g667j\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722845 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgrsl\" (UniqueName: \"kubernetes.io/projected/c61bb947-7202-4145-99f8-5060296a1dc9-kube-api-access-fgrsl\") pod \"migrator-59844c95c7-vlzz2\" (UID: \"c61bb947-7202-4145-99f8-5060296a1dc9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722872 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-socket-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722895 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2tj\" (UniqueName: \"kubernetes.io/projected/a72c5f55-1631-4f1a-8eb8-0c01edbdea67-kube-api-access-xj2tj\") pod \"service-ca-operator-777779d784-5vdt8\" (UID: \"a72c5f55-1631-4f1a-8eb8-0c01edbdea67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722924 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2d553207-9e63-4091-955d-35b3a8625ddb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-665jx\" (UID: \"2d553207-9e63-4091-955d-35b3a8625ddb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722940 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96bbdc9d-911c-4916-a775-6ad2f827f831-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722948 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1609f508-67f3-4209-b9f3-e2195456befe-certs\") pod \"machine-config-server-kj9rl\" (UID: \"1609f508-67f3-4209-b9f3-e2195456befe\") " pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.722996 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-registration-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.723040 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7f6648a-8487-415a-bdd3-16a27fea4871-trusted-ca\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.723397 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8b185e5-e51b-4945-baca-221a382c0714-audit-policies\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.723774 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cadfb13c-1ae0-4e22-b3df-fc477f51d4dd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dxvqz\" (UID: \"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.724069 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a4f1b9f2-8afc-4817-a97b-4788f99674fe-etcd-ca\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.724174 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pwnk8\" (UID: \"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.724568 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f-proxy-tls\") pod \"machine-config-controller-84d6567774-pwnk8\" (UID: \"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.724952 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadfb13c-1ae0-4e22-b3df-fc477f51d4dd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dxvqz\" (UID: \"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.725594 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e8b185e5-e51b-4945-baca-221a382c0714-encryption-config\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.727061 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9eff9a-660a-450b-9c63-c473634e7d0a-config-volume\") pod \"collect-profiles-29421315-g667j\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.729044 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.729104 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-trusted-ca-bundle\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.729718 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e7f6648a-8487-415a-bdd3-16a27fea4871-metrics-tls\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.731267 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3cfa026c-5ae3-47cc-aee8-b06522339617-metrics-tls\") pod \"dns-default-h72pd\" (UID: \"3cfa026c-5ae3-47cc-aee8-b06522339617\") " pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.731393 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9eff9a-660a-450b-9c63-c473634e7d0a-secret-volume\") pod \"collect-profiles-29421315-g667j\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.731699 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8b185e5-e51b-4945-baca-221a382c0714-serving-cert\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.731780 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e8b185e5-e51b-4945-baca-221a382c0714-etcd-client\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.732268 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4f1b9f2-8afc-4817-a97b-4788f99674fe-etcd-client\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.734298 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a477726d-aae1-47d9-8a3a-70316f991c29-metrics-tls\") pod \"dns-operator-744455d44c-njgnq\" (UID: \"a477726d-aae1-47d9-8a3a-70316f991c29\") " pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.734537 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/db29ce09-9dfc-44aa-9eec-3a431d33e0e6-serving-cert\") pod \"openshift-config-operator-7777fb866f-zjfc7\" (UID: \"db29ce09-9dfc-44aa-9eec-3a431d33e0e6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.734939 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a9eab53b-c723-460f-b55e-88b441b25a76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z4fst\" (UID: \"a9eab53b-c723-460f-b55e-88b441b25a76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.735114 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-serving-cert\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.736579 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-serving-cert\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.737619 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1b338b8-701c-4c0e-87ba-f830190df7eb-cert\") pod \"ingress-canary-n726k\" (UID: \"f1b338b8-701c-4c0e-87ba-f830190df7eb\") " pod="openshift-ingress-canary/ingress-canary-n726k" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.743323 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh86z\" (UniqueName: \"kubernetes.io/projected/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-kube-api-access-lh86z\") pod \"console-f9d7485db-l6kz7\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.743669 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.764317 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhlf9\" (UniqueName: \"kubernetes.io/projected/db29ce09-9dfc-44aa-9eec-3a431d33e0e6-kube-api-access-dhlf9\") pod \"openshift-config-operator-7777fb866f-zjfc7\" (UID: \"db29ce09-9dfc-44aa-9eec-3a431d33e0e6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.780703 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4md5m\" (UniqueName: \"kubernetes.io/projected/8948a613-56f3-4a89-adb7-2c4a2262f2ee-kube-api-access-4md5m\") pod \"downloads-7954f5f757-74c2r\" (UID: \"8948a613-56f3-4a89-adb7-2c4a2262f2ee\") " pod="openshift-console/downloads-7954f5f757-74c2r" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.821656 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.821775 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn8mh\" (UniqueName: \"kubernetes.io/projected/e8b185e5-e51b-4945-baca-221a382c0714-kube-api-access-pn8mh\") pod \"apiserver-7bbb656c7d-k9zjm\" (UID: \"e8b185e5-e51b-4945-baca-221a382c0714\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846176 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846533 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-csi-data-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846579 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1f27dee9-7157-455c-84d5-24c51b874b53-tmpfs\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846605 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c3d88dfe-fa31-4759-baa6-6c847eb53020-stats-auth\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846631 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwqmp\" (UniqueName: \"kubernetes.io/projected/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-kube-api-access-gwqmp\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846681 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b4e4e9-c586-4aad-a2d0-220cc1bc9f43-config\") pod \"kube-apiserver-operator-766d6c64bb-4d9tw\" (UID: \"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846697 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/29dd11e8-40b1-485b-a1c7-4df44220d7b0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ddv82\" (UID: \"29dd11e8-40b1-485b-a1c7-4df44220d7b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846717 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab0b2db-189a-44e6-a904-49c45fca1a3e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8pb5v\" (UID: \"2ab0b2db-189a-44e6-a904-49c45fca1a3e\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846749 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18b4e4e9-c586-4aad-a2d0-220cc1bc9f43-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4d9tw\" (UID: \"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846765 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2314f111-b042-40c3-832c-1c0d49c5e088-serving-cert\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846780 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-plugins-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846797 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3d88dfe-fa31-4759-baa6-6c847eb53020-service-ca-bundle\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846813 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3d88dfe-fa31-4759-baa6-6c847eb53020-metrics-certs\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846834 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1609f508-67f3-4209-b9f3-e2195456befe-node-bootstrap-token\") pod \"machine-config-server-kj9rl\" (UID: \"1609f508-67f3-4209-b9f3-e2195456befe\") " pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846850 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72c5f55-1631-4f1a-8eb8-0c01edbdea67-serving-cert\") pod \"service-ca-operator-777779d784-5vdt8\" (UID: \"a72c5f55-1631-4f1a-8eb8-0c01edbdea67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846871 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a0fccbe-ade0-4666-8758-d67f3c74e8e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dvfn7\" (UID: \"4a0fccbe-ade0-4666-8758-d67f3c74e8e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846891 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkdm\" (UniqueName: \"kubernetes.io/projected/13016872-91b5-446f-a10a-93e366928c47-kube-api-access-6qkdm\") pod \"package-server-manager-789f6589d5-n8jqm\" (UID: \"13016872-91b5-446f-a10a-93e366928c47\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846909 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxbgc\" (UniqueName: \"kubernetes.io/projected/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-kube-api-access-kxbgc\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846924 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79459db6-26fd-4be1-a0f6-ac4217a8229c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-grhwc\" (UID: \"79459db6-26fd-4be1-a0f6-ac4217a8229c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846962 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2314f111-b042-40c3-832c-1c0d49c5e088-config\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.846979 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ef19983-4775-4438-83a8-f8279c96959c-profile-collector-cert\") pod \"catalog-operator-68c6474976-szrq9\" (UID: \"4ef19983-4775-4438-83a8-f8279c96959c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847006 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8dkm\" (UniqueName: \"kubernetes.io/projected/79459db6-26fd-4be1-a0f6-ac4217a8229c-kube-api-access-c8dkm\") pod \"openshift-apiserver-operator-796bbdcf4f-grhwc\" (UID: \"79459db6-26fd-4be1-a0f6-ac4217a8229c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847056 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdtj2\" (UniqueName: \"kubernetes.io/projected/1f27dee9-7157-455c-84d5-24c51b874b53-kube-api-access-tdtj2\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847072 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847090 4849 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7xksm\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847136 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f27dee9-7157-455c-84d5-24c51b874b53-webhook-cert\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847152 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79459db6-26fd-4be1-a0f6-ac4217a8229c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-grhwc\" (UID: \"79459db6-26fd-4be1-a0f6-ac4217a8229c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847169 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9tff\" (UniqueName: \"kubernetes.io/projected/4e1464d3-a5a6-4fc8-a091-77f41d391939-kube-api-access-t9tff\") pod \"service-ca-9c57cc56f-fvjw7\" (UID: \"4e1464d3-a5a6-4fc8-a091-77f41d391939\") " pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847193 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b2xv\" (UniqueName: \"kubernetes.io/projected/4ef19983-4775-4438-83a8-f8279c96959c-kube-api-access-2b2xv\") pod \"catalog-operator-68c6474976-szrq9\" (UID: \"4ef19983-4775-4438-83a8-f8279c96959c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847209 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7xksm\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847225 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfbc3e4e-babb-4119-9343-68c87540802e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8g6j2\" (UID: \"dfbc3e4e-babb-4119-9343-68c87540802e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847210 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-csi-data-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847241 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dfbc3e4e-babb-4119-9343-68c87540802e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8g6j2\" (UID: \"dfbc3e4e-babb-4119-9343-68c87540802e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847259 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/13016872-91b5-446f-a10a-93e366928c47-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n8jqm\" (UID: \"13016872-91b5-446f-a10a-93e366928c47\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847277 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfdd2\" (UniqueName: \"kubernetes.io/projected/3e7c4a38-1f7c-4cb1-b757-8250869e1597-kube-api-access-zfdd2\") pod \"marketplace-operator-79b997595-7xksm\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847304 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-mountpoint-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847320 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c3d88dfe-fa31-4759-baa6-6c847eb53020-default-certificate\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847342 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2314f111-b042-40c3-832c-1c0d49c5e088-trusted-ca\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847364 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72c5f55-1631-4f1a-8eb8-0c01edbdea67-config\") pod \"service-ca-operator-777779d784-5vdt8\" (UID: \"a72c5f55-1631-4f1a-8eb8-0c01edbdea67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847385 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw8zn\" (UniqueName: \"kubernetes.io/projected/1609f508-67f3-4209-b9f3-e2195456befe-kube-api-access-rw8zn\") pod \"machine-config-server-kj9rl\" (UID: \"1609f508-67f3-4209-b9f3-e2195456befe\") " pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847401 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4e4e9-c586-4aad-a2d0-220cc1bc9f43-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4d9tw\" (UID: 
\"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847455 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ef19983-4775-4438-83a8-f8279c96959c-srv-cert\") pod \"catalog-operator-68c6474976-szrq9\" (UID: \"4ef19983-4775-4438-83a8-f8279c96959c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847472 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwkrn\" (UniqueName: \"kubernetes.io/projected/29dd11e8-40b1-485b-a1c7-4df44220d7b0-kube-api-access-dwkrn\") pod \"olm-operator-6b444d44fb-ddv82\" (UID: \"29dd11e8-40b1-485b-a1c7-4df44220d7b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847488 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-socket-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847504 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgrsl\" (UniqueName: \"kubernetes.io/projected/c61bb947-7202-4145-99f8-5060296a1dc9-kube-api-access-fgrsl\") pod \"migrator-59844c95c7-vlzz2\" (UID: \"c61bb947-7202-4145-99f8-5060296a1dc9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847537 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj2tj\" (UniqueName: \"kubernetes.io/projected/a72c5f55-1631-4f1a-8eb8-0c01edbdea67-kube-api-access-xj2tj\") pod \"service-ca-operator-777779d784-5vdt8\" (UID: \"a72c5f55-1631-4f1a-8eb8-0c01edbdea67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847553 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1609f508-67f3-4209-b9f3-e2195456befe-certs\") pod \"machine-config-server-kj9rl\" (UID: \"1609f508-67f3-4209-b9f3-e2195456befe\") " pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847567 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-registration-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847584 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2d553207-9e63-4091-955d-35b3a8625ddb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-665jx\" (UID: \"2d553207-9e63-4091-955d-35b3a8625ddb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847603 4849 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847620 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f27dee9-7157-455c-84d5-24c51b874b53-apiservice-cert\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847653 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/29dd11e8-40b1-485b-a1c7-4df44220d7b0-srv-cert\") pod \"olm-operator-6b444d44fb-ddv82\" (UID: \"29dd11e8-40b1-485b-a1c7-4df44220d7b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847668 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-images\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847682 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxtms\" (UniqueName: \"kubernetes.io/projected/2314f111-b042-40c3-832c-1c0d49c5e088-kube-api-access-rxtms\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847698 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtfzb\" (UniqueName: \"kubernetes.io/projected/4a0fccbe-ade0-4666-8758-d67f3c74e8e7-kube-api-access-xtfzb\") pod \"control-plane-machine-set-operator-78cbb6b69f-dvfn7\" (UID: \"4a0fccbe-ade0-4666-8758-d67f3c74e8e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847716 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bljt2\" (UniqueName: \"kubernetes.io/projected/2d553207-9e63-4091-955d-35b3a8625ddb-kube-api-access-bljt2\") pod \"multus-admission-controller-857f4d67dd-665jx\" (UID: \"2d553207-9e63-4091-955d-35b3a8625ddb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847739 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0b2db-189a-44e6-a904-49c45fca1a3e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8pb5v\" (UID: \"2ab0b2db-189a-44e6-a904-49c45fca1a3e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847758 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-proxy-tls\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847773 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ab0b2db-189a-44e6-a904-49c45fca1a3e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8pb5v\" (UID: \"2ab0b2db-189a-44e6-a904-49c45fca1a3e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847789 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hjkt\" (UniqueName: \"kubernetes.io/projected/c3d88dfe-fa31-4759-baa6-6c847eb53020-kube-api-access-8hjkt\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847804 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4e1464d3-a5a6-4fc8-a091-77f41d391939-signing-key\") pod \"service-ca-9c57cc56f-fvjw7\" (UID: \"4e1464d3-a5a6-4fc8-a091-77f41d391939\") " pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847828 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbc3e4e-babb-4119-9343-68c87540802e-config\") pod \"kube-controller-manager-operator-78b949d7b-8g6j2\" (UID: \"dfbc3e4e-babb-4119-9343-68c87540802e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.847844 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4e1464d3-a5a6-4fc8-a091-77f41d391939-signing-cabundle\") pod \"service-ca-9c57cc56f-fvjw7\" (UID: \"4e1464d3-a5a6-4fc8-a091-77f41d391939\") " pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.848225 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1f27dee9-7157-455c-84d5-24c51b874b53-tmpfs\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.849065 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b4e4e9-c586-4aad-a2d0-220cc1bc9f43-config\") pod \"kube-apiserver-operator-766d6c64bb-4d9tw\" (UID: \"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.850453 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2314f111-b042-40c3-832c-1c0d49c5e088-config\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " 
pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.850483 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4e1464d3-a5a6-4fc8-a091-77f41d391939-signing-cabundle\") pod \"service-ca-9c57cc56f-fvjw7\" (UID: \"4e1464d3-a5a6-4fc8-a091-77f41d391939\") " pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.851144 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab0b2db-189a-44e6-a904-49c45fca1a3e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8pb5v\" (UID: \"2ab0b2db-189a-44e6-a904-49c45fca1a3e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:22 crc kubenswrapper[4849]: E1209 11:29:22.852284 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.352268836 +0000 UTC m=+145.892153242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.854867 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-plugins-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.856295 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-mountpoint-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.856465 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0b2db-189a-44e6-a904-49c45fca1a3e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8pb5v\" (UID: \"2ab0b2db-189a-44e6-a904-49c45fca1a3e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.857707 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79459db6-26fd-4be1-a0f6-ac4217a8229c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-grhwc\" (UID: \"79459db6-26fd-4be1-a0f6-ac4217a8229c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.857754 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dfbc3e4e-babb-4119-9343-68c87540802e-config\") pod \"kube-controller-manager-operator-78b949d7b-8g6j2\" (UID: \"dfbc3e4e-babb-4119-9343-68c87540802e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.858157 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-images\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.861077 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3d88dfe-fa31-4759-baa6-6c847eb53020-service-ca-bundle\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.861205 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f27dee9-7157-455c-84d5-24c51b874b53-apiservice-cert\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.861601 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2d553207-9e63-4091-955d-35b3a8625ddb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-665jx\" (UID: \"2d553207-9e63-4091-955d-35b3a8625ddb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.866730 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4e1464d3-a5a6-4fc8-a091-77f41d391939-signing-key\") pod \"service-ca-9c57cc56f-fvjw7\" (UID: \"4e1464d3-a5a6-4fc8-a091-77f41d391939\") " pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.867620 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72c5f55-1631-4f1a-8eb8-0c01edbdea67-config\") pod \"service-ca-operator-777779d784-5vdt8\" (UID: \"a72c5f55-1631-4f1a-8eb8-0c01edbdea67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.871391 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2314f111-b042-40c3-832c-1c0d49c5e088-trusted-ca\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.872520 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3d88dfe-fa31-4759-baa6-6c847eb53020-metrics-certs\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.872868 4849 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-socket-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.873299 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1609f508-67f3-4209-b9f3-e2195456befe-node-bootstrap-token\") pod \"machine-config-server-kj9rl\" (UID: \"1609f508-67f3-4209-b9f3-e2195456befe\") " pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.873835 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72c5f55-1631-4f1a-8eb8-0c01edbdea67-serving-cert\") pod \"service-ca-operator-777779d784-5vdt8\" (UID: \"a72c5f55-1631-4f1a-8eb8-0c01edbdea67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.874175 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79459db6-26fd-4be1-a0f6-ac4217a8229c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-grhwc\" (UID: \"79459db6-26fd-4be1-a0f6-ac4217a8229c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.874658 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7xksm\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.877283 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1609f508-67f3-4209-b9f3-e2195456befe-certs\") pod \"machine-config-server-kj9rl\" (UID: \"1609f508-67f3-4209-b9f3-e2195456befe\") " pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.877361 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-registration-dir\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.878685 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c3d88dfe-fa31-4759-baa6-6c847eb53020-stats-auth\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.879097 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/29dd11e8-40b1-485b-a1c7-4df44220d7b0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ddv82\" (UID: \"29dd11e8-40b1-485b-a1c7-4df44220d7b0\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.879589 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2314f111-b042-40c3-832c-1c0d49c5e088-serving-cert\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.880816 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4e4e9-c586-4aad-a2d0-220cc1bc9f43-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4d9tw\" (UID: \"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.880957 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ef19983-4775-4438-83a8-f8279c96959c-profile-collector-cert\") pod \"catalog-operator-68c6474976-szrq9\" (UID: \"4ef19983-4775-4438-83a8-f8279c96959c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.881055 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/13016872-91b5-446f-a10a-93e366928c47-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n8jqm\" (UID: \"13016872-91b5-446f-a10a-93e366928c47\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.881577 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c3d88dfe-fa31-4759-baa6-6c847eb53020-default-certificate\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.882011 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7f6648a-8487-415a-bdd3-16a27fea4871-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.882030 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/29dd11e8-40b1-485b-a1c7-4df44220d7b0-srv-cert\") pod \"olm-operator-6b444d44fb-ddv82\" (UID: \"29dd11e8-40b1-485b-a1c7-4df44220d7b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.882218 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.882630 4849 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfbc3e4e-babb-4119-9343-68c87540802e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8g6j2\" (UID: \"dfbc3e4e-babb-4119-9343-68c87540802e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.882811 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-proxy-tls\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.883059 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7xksm\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.884102 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ef19983-4775-4438-83a8-f8279c96959c-srv-cert\") pod \"catalog-operator-68c6474976-szrq9\" (UID: \"4ef19983-4775-4438-83a8-f8279c96959c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.884147 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f27dee9-7157-455c-84d5-24c51b874b53-webhook-cert\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.884518 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a0fccbe-ade0-4666-8758-d67f3c74e8e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dvfn7\" (UID: \"4a0fccbe-ade0-4666-8758-d67f3c74e8e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.887979 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22s45\" (UniqueName: \"kubernetes.io/projected/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-kube-api-access-22s45\") pod \"controller-manager-879f6c89f-8wwr8\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.894880 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ecade532-431e-464e-af8f-bdb1fe23ec47-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.908068 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7rll\" (UniqueName: 
\"kubernetes.io/projected/96bbdc9d-911c-4916-a775-6ad2f827f831-kube-api-access-n7rll\") pod \"authentication-operator-69f744f599-gr4ld\" (UID: \"96bbdc9d-911c-4916-a775-6ad2f827f831\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.908432 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.923988 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9p2z\" (UniqueName: \"kubernetes.io/projected/b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f-kube-api-access-m9p2z\") pod \"machine-config-controller-84d6567774-pwnk8\" (UID: \"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.941586 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-74c2r" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.949159 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:22 crc kubenswrapper[4849]: E1209 11:29:22.949673 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.449657813 +0000 UTC m=+145.989542129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.952840 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dcf\" (UniqueName: \"kubernetes.io/projected/a4f1b9f2-8afc-4817-a97b-4788f99674fe-kube-api-access-z4dcf\") pod \"etcd-operator-b45778765-nshxd\" (UID: \"a4f1b9f2-8afc-4817-a97b-4788f99674fe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:22 crc kubenswrapper[4849]: I1209 11:29:22.970867 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fmcd\" (UniqueName: \"kubernetes.io/projected/a477726d-aae1-47d9-8a3a-70316f991c29-kube-api-access-6fmcd\") pod \"dns-operator-744455d44c-njgnq\" (UID: \"a477726d-aae1-47d9-8a3a-70316f991c29\") " pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.023928 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.036387 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.050728 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:23 crc kubenswrapper[4849]: E1209 11:29:23.051138 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.551124345 +0000 UTC m=+146.091008661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.151697 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:23 crc kubenswrapper[4849]: E1209 11:29:23.152338 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.652316739 +0000 UTC m=+146.192201045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.207349 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.243968 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.244648 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.253179 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:23 crc kubenswrapper[4849]: E1209 11:29:23.253687 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.753675037 +0000 UTC m=+146.293559343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.261422 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gxc\" (UniqueName: \"kubernetes.io/projected/e7f6648a-8487-415a-bdd3-16a27fea4871-kube-api-access-n4gxc\") pod \"ingress-operator-5b745b69d9-zm9cl\" (UID: \"e7f6648a-8487-415a-bdd3-16a27fea4871\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.291160 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgrqm\" (UniqueName: \"kubernetes.io/projected/f1b338b8-701c-4c0e-87ba-f830190df7eb-kube-api-access-pgrqm\") pod \"ingress-canary-n726k\" (UID: \"f1b338b8-701c-4c0e-87ba-f830190df7eb\") " pod="openshift-ingress-canary/ingress-canary-n726k" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.293324 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bljt2\" (UniqueName: \"kubernetes.io/projected/2d553207-9e63-4091-955d-35b3a8625ddb-kube-api-access-bljt2\") pod \"multus-admission-controller-857f4d67dd-665jx\" (UID: \"2d553207-9e63-4091-955d-35b3a8625ddb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.293493 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwjcj\" (UniqueName: \"kubernetes.io/projected/8e9eff9a-660a-450b-9c63-c473634e7d0a-kube-api-access-pwjcj\") pod \"collect-profiles-29421315-g667j\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.297173 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg8dz\" (UniqueName: \"kubernetes.io/projected/ecade532-431e-464e-af8f-bdb1fe23ec47-kube-api-access-pg8dz\") pod \"cluster-image-registry-operator-dc59b4c8b-xtskh\" (UID: \"ecade532-431e-464e-af8f-bdb1fe23ec47\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 
11:29:23.297251 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78vcz\" (UniqueName: \"kubernetes.io/projected/cadfb13c-1ae0-4e22-b3df-fc477f51d4dd-kube-api-access-78vcz\") pod \"openshift-controller-manager-operator-756b6f6bc6-dxvqz\" (UID: \"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.297719 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpf62\" (UniqueName: \"kubernetes.io/projected/a9eab53b-c723-460f-b55e-88b441b25a76-kube-api-access-qpf62\") pod \"cluster-samples-operator-665b6dd947-z4fst\" (UID: \"a9eab53b-c723-460f-b55e-88b441b25a76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.298055 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hjkt\" (UniqueName: \"kubernetes.io/projected/c3d88dfe-fa31-4759-baa6-6c847eb53020-kube-api-access-8hjkt\") pod \"router-default-5444994796-flzss\" (UID: \"c3d88dfe-fa31-4759-baa6-6c847eb53020\") " pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.298248 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkdm\" (UniqueName: \"kubernetes.io/projected/13016872-91b5-446f-a10a-93e366928c47-kube-api-access-6qkdm\") pod \"package-server-manager-789f6589d5-n8jqm\" (UID: \"13016872-91b5-446f-a10a-93e366928c47\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.306960 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.311219 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxbgc\" (UniqueName: \"kubernetes.io/projected/43fc7f6a-abb1-476f-bfb3-15ce82c13f41-kube-api-access-kxbgc\") pod \"machine-config-operator-74547568cd-lchf7\" (UID: \"43fc7f6a-abb1-476f-bfb3-15ce82c13f41\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.311980 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwqmp\" (UniqueName: \"kubernetes.io/projected/92f2bbfc-50e9-4d11-ac60-28efcfaea5b4-kube-api-access-gwqmp\") pod \"csi-hostpathplugin-qmrg5\" (UID: \"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4\") " pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.312019 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfdd2\" (UniqueName: \"kubernetes.io/projected/3e7c4a38-1f7c-4cb1-b757-8250869e1597-kube-api-access-zfdd2\") pod \"marketplace-operator-79b997595-7xksm\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.312501 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.312710 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwkrn\" (UniqueName: \"kubernetes.io/projected/29dd11e8-40b1-485b-a1c7-4df44220d7b0-kube-api-access-dwkrn\") pod \"olm-operator-6b444d44fb-ddv82\" (UID: \"29dd11e8-40b1-485b-a1c7-4df44220d7b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.317769 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18b4e4e9-c586-4aad-a2d0-220cc1bc9f43-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4d9tw\" (UID: \"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.320550 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.341342 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l6kz7"] Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.341534 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brhnq\" (UniqueName: \"kubernetes.io/projected/3cfa026c-5ae3-47cc-aee8-b06522339617-kube-api-access-brhnq\") pod \"dns-default-h72pd\" (UID: \"3cfa026c-5ae3-47cc-aee8-b06522339617\") " pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.358973 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.362557 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9tff\" (UniqueName: \"kubernetes.io/projected/4e1464d3-a5a6-4fc8-a091-77f41d391939-kube-api-access-t9tff\") pod \"service-ca-9c57cc56f-fvjw7\" (UID: \"4e1464d3-a5a6-4fc8-a091-77f41d391939\") " pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.380852 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.384524 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw8zn\" (UniqueName: \"kubernetes.io/projected/1609f508-67f3-4209-b9f3-e2195456befe-kube-api-access-rw8zn\") pod \"machine-config-server-kj9rl\" (UID: \"1609f508-67f3-4209-b9f3-e2195456befe\") " pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.392728 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.418754 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b2xv\" (UniqueName: \"kubernetes.io/projected/4ef19983-4775-4438-83a8-f8279c96959c-kube-api-access-2b2xv\") pod \"catalog-operator-68c6474976-szrq9\" (UID: \"4ef19983-4775-4438-83a8-f8279c96959c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:23 crc kubenswrapper[4849]: E1209 11:29:23.422069 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.922037001 +0000 UTC m=+146.461921307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.422288 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n726k" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.423065 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:23 crc kubenswrapper[4849]: E1209 11:29:23.423374 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:23.923367105 +0000 UTC m=+146.463251421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.425427 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdtj2\" (UniqueName: \"kubernetes.io/projected/1f27dee9-7157-455c-84d5-24c51b874b53-kube-api-access-tdtj2\") pod \"packageserver-d55dfcdfc-85jmr\" (UID: \"1f27dee9-7157-455c-84d5-24c51b874b53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.437222 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.439937 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.441096 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgrsl\" (UniqueName: \"kubernetes.io/projected/c61bb947-7202-4145-99f8-5060296a1dc9-kube-api-access-fgrsl\") pod \"migrator-59844c95c7-vlzz2\" (UID: \"c61bb947-7202-4145-99f8-5060296a1dc9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.458584 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.473425 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj2tj\" (UniqueName: \"kubernetes.io/projected/a72c5f55-1631-4f1a-8eb8-0c01edbdea67-kube-api-access-xj2tj\") pod \"service-ca-operator-777779d784-5vdt8\" (UID: \"a72c5f55-1631-4f1a-8eb8-0c01edbdea67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.483072 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.483508 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.487547 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8dkm\" (UniqueName: \"kubernetes.io/projected/79459db6-26fd-4be1-a0f6-ac4217a8229c-kube-api-access-c8dkm\") pod \"openshift-apiserver-operator-796bbdcf4f-grhwc\" (UID: \"79459db6-26fd-4be1-a0f6-ac4217a8229c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.490485 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.505106 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.512839 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxtms\" (UniqueName: \"kubernetes.io/projected/2314f111-b042-40c3-832c-1c0d49c5e088-kube-api-access-rxtms\") pod \"console-operator-58897d9998-9zsgd\" (UID: \"2314f111-b042-40c3-832c-1c0d49c5e088\") " pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.513068 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.516631 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfbc3e4e-babb-4119-9343-68c87540802e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8g6j2\" (UID: \"dfbc3e4e-babb-4119-9343-68c87540802e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.524319 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:23 crc kubenswrapper[4849]: E1209 11:29:23.525111 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:24.025041941 +0000 UTC m=+146.564926257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.525352 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.529043 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ab0b2db-189a-44e6-a904-49c45fca1a3e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8pb5v\" (UID: \"2ab0b2db-189a-44e6-a904-49c45fca1a3e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.529404 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.553753 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtfzb\" (UniqueName: \"kubernetes.io/projected/4a0fccbe-ade0-4666-8758-d67f3c74e8e7-kube-api-access-xtfzb\") pod \"control-plane-machine-set-operator-78cbb6b69f-dvfn7\" (UID: \"4a0fccbe-ade0-4666-8758-d67f3c74e8e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.559069 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.603670 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.603753 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.603809 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.603855 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.853108 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.854718 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:23 crc kubenswrapper[4849]: E1209 11:29:23.855111 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:24.355095397 +0000 UTC m=+146.894979713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.855134 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kj9rl" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.855181 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.856689 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" Dec 09 11:29:23 crc kubenswrapper[4849]: I1209 11:29:23.856780 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.013157 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.013745 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:24.513722212 +0000 UTC m=+147.053606528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.140361 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.140952 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:24.640930039 +0000 UTC m=+147.180814355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.206934 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" event={"ID":"ff9d1831-83f7-46b5-a110-4ef163ec3516","Type":"ContainerStarted","Data":"afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb"} Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.209603 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.222898 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" event={"ID":"158b2582-edcf-45dc-908a-28112166eab0","Type":"ContainerStarted","Data":"31afacb3a7591f8ada6b9d6fc061634af9c9fe3e83f1f75d90ef579716462c0b"} Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.224495 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" event={"ID":"d41acaad-c321-4016-8330-f6de9b6e9326","Type":"ContainerStarted","Data":"f6669aca7e09b2cd1fec6243e05db5cf29af6cbffded49235ebd33a12f6931c4"} Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.224523 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" event={"ID":"d41acaad-c321-4016-8330-f6de9b6e9326","Type":"ContainerStarted","Data":"e86cfbcaf91edb247997c19508276ea54df1df06717029f721614467db6eb08d"} Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.265884 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.265986 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:24.765967349 +0000 UTC m=+147.305851665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.266357 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.266486 4849 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-25rtx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" start-of-body= Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.266549 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.267011 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" event={"ID":"c7d83c17-96de-4f5b-b3c0-199d7fa21fab","Type":"ContainerStarted","Data":"d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada"} Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.271605 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:24.771590173 +0000 UTC m=+147.311474489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.271733 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.279917 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" event={"ID":"adfc03a3-e122-4ebf-b69c-6fdc39087856","Type":"ContainerStarted","Data":"f7188e8efbd0b990273e70682d5e41a64186dcbd50dd6c4e5ea1c484d4d0b661"} Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.290757 4849 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q5fhv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.290819 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" podUID="c7d83c17-96de-4f5b-b3c0-199d7fa21fab" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.333331 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj"] Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.368487 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.368974 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:24.86895829 +0000 UTC m=+147.408842596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.369786 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm"] Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.466185 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7"] Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.471461 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.473248 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:24.973235032 +0000 UTC m=+147.513119348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.503934 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-74c2r"] Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.572926 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.573178 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:25.073164355 +0000 UTC m=+147.613048671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.574998 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" podStartSLOduration=124.574987711 podStartE2EDuration="2m4.574987711s" podCreationTimestamp="2025-12-09 11:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:24.572530809 +0000 UTC m=+147.112415125" watchObservedRunningTime="2025-12-09 11:29:24.574987711 +0000 UTC m=+147.114872027" Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.678297 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.678777 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:25.178766431 +0000 UTC m=+147.718650747 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.698132 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzkn8" podStartSLOduration=127.698114134 podStartE2EDuration="2m7.698114134s" podCreationTimestamp="2025-12-09 11:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:24.698031272 +0000 UTC m=+147.237915588" watchObservedRunningTime="2025-12-09 11:29:24.698114134 +0000 UTC m=+147.237998450" Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.716330 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gr4ld"] Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.779111 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.779464 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:25.279445153 +0000 UTC m=+147.819329469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:24 crc kubenswrapper[4849]: W1209 11:29:24.779554 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb29ce09_9dfc_44aa_9eec_3a431d33e0e6.slice/crio-e8d05e7fe8666021a4440f3c1cafc379d41b178cb4b6d9236d9cdd3ea37b33fa WatchSource:0}: Error finding container e8d05e7fe8666021a4440f3c1cafc379d41b178cb4b6d9236d9cdd3ea37b33fa: Status 404 returned error can't find the container with id e8d05e7fe8666021a4440f3c1cafc379d41b178cb4b6d9236d9cdd3ea37b33fa Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.860085 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8"] Dec 09 11:29:24 crc kubenswrapper[4849]: I1209 11:29:24.880982 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:24 crc kubenswrapper[4849]: E1209 11:29:24.881454 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:25.381394876 +0000 UTC m=+147.921279192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.002013 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.002368 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:25.502350283 +0000 UTC m=+148.042234599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.019831 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zqkl8" podStartSLOduration=126.019815808 podStartE2EDuration="2m6.019815808s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:24.992966925 +0000 UTC m=+147.532851241" watchObservedRunningTime="2025-12-09 11:29:25.019815808 +0000 UTC m=+147.559700124" Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.023575 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst"] Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.023611 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-njgnq"] Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.103633 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.104052 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:25.60403762 +0000 UTC m=+148.143921936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.204634 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.204967 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:25.704948727 +0000 UTC m=+148.244833043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.307038 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.308150 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:25.808126902 +0000 UTC m=+148.348011218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.392334 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm"] Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.407309 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" podStartSLOduration=127.407290544 podStartE2EDuration="2m7.407290544s" podCreationTimestamp="2025-12-09 11:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:25.403754945 +0000 UTC m=+147.943639281" watchObservedRunningTime="2025-12-09 11:29:25.407290544 +0000 UTC m=+147.947174860" Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.410117 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.411557 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:25.911540863 +0000 UTC m=+148.451425179 (durationBeforeRetry 500ms). 
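The "No retries permitted until … (durationBeforeRetry 500ms)" lines come from the kubelet's pending-operations bookkeeping (nestedpendingoperations.go in the traceback): after a volume operation fails, the next attempt for the same key is gated behind a backoff deadline. A simplified sketch of that gate, with assumed growth constants:

    package main

    import (
        "fmt"
        "time"
    )

    // expBackoff approximates the per-operation backoff visible in the log:
    // after each failure the operation may not run again until
    // lastError + duration, and duration grows on consecutive failures.
    type expBackoff struct {
        lastError time.Time
        duration  time.Duration
    }

    const (
        initialDuration = 500 * time.Millisecond // matches durationBeforeRetry 500ms
        durationFactor  = 2                      // assumed growth factor
        maxDuration     = 2 * time.Minute        // assumed cap
    )

    func (b *expBackoff) update(now time.Time) {
        if b.duration == 0 {
            b.duration = initialDuration
        } else if d := b.duration * durationFactor; d < maxDuration {
            b.duration = d
        } else {
            b.duration = maxDuration
        }
        b.lastError = now
    }

    func (b *expBackoff) safeToRetry(now time.Time) error {
        if deadline := b.lastError.Add(b.duration); now.Before(deadline) {
            return fmt.Errorf("no retries permitted until %v (durationBeforeRetry %v)", deadline, b.duration)
        }
        return nil
    }

    func main() {
        b := &expBackoff{}
        now := time.Now()
        for i := 0; i < 3; i++ {
            b.update(now) // the operation just failed
            if err := b.safeToRetry(now); err != nil {
                fmt.Println(err)
            }
            now = now.Add(b.duration) // wait out the delay before the next attempt
        }
    }

The delay stays at 500ms on every line of this capture; the growth factor and cap in the sketch are assumptions about the general mechanism, not values observed here.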
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.456630 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-74c2r" event={"ID":"8948a613-56f3-4a89-adb7-2c4a2262f2ee","Type":"ContainerStarted","Data":"58032c56a84df5d249cc318069313261994ebbb881a7422558f24f0e54ff48a7"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.497970 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" podStartSLOduration=127.497953151 podStartE2EDuration="2m7.497953151s" podCreationTimestamp="2025-12-09 11:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:25.496213287 +0000 UTC m=+148.036097603" watchObservedRunningTime="2025-12-09 11:29:25.497953151 +0000 UTC m=+148.037837467" Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.514899 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.515298 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.015284282 +0000 UTC m=+148.555168598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.524444 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" event={"ID":"96bbdc9d-911c-4916-a775-6ad2f827f831","Type":"ContainerStarted","Data":"f2916260b9064727202acdd22d5062e60120e23d4c40f9fcaedb0268b738c555"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.580821 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" event={"ID":"adfc03a3-e122-4ebf-b69c-6fdc39087856","Type":"ContainerStarted","Data":"8b1fc83487997fe287af898af36be023f7a649206cc6dae9c7aa5d947c3568d5"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.608967 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" event={"ID":"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f","Type":"ContainerStarted","Data":"b62297f10b73be33ea46e6725f959673acd2321a7937bb43ee56dedfa39a348e"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.616214 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.617764 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.117744609 +0000 UTC m=+148.657628925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.642816 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l6kz7" event={"ID":"1e6507b4-4ff1-4fc1-afee-9e6c2e909908","Type":"ContainerStarted","Data":"2c9187625b602248c950ecc24956860f43f7dbfd97b1d626f5118b74e918c273"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.661156 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.661401 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.669555 4849 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jlw2t container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.669619 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" podUID="adfc03a3-e122-4ebf-b69c-6fdc39087856" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.736030 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.738085 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.238051679 +0000 UTC m=+148.777935995 (durationBeforeRetry 500ms). 
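The startup-probe failure above ("Get \"https://10.217.0.6:8443/livez\": … connection refused") is the expected shape while openshift-apiserver is still binding its socket: the kubelet's HTTP probe is essentially a GET with a short timeout, and any dial error counts as a failed attempt. A sketch of that check, with certificate verification simplified for illustration:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce mimics the shape of a kubelet HTTP probe: a GET with a short
    // timeout. A dial error (like the "connection refused" above) or a
    // status outside 200-399 is reported as a failed probe.
    func probeOnce(url string) error {
        client := &http.Client{
            Timeout: 1 * time.Second,
            // Simplification for this sketch: skip serving-cert verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("probe failed: %v", err) // e.g. connect: connection refused
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("probe failed: status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probeOnce("https://10.217.0.6:8443/livez"); err != nil {
            fmt.Println(err)
        }
    }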
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.741179 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kj9rl" event={"ID":"1609f508-67f3-4209-b9f3-e2195456befe","Type":"ContainerStarted","Data":"8f963314636d4cc15e83fbfcc3efa092851d93611abe0f8c1d9eb4715ea87ed6"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.765160 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" event={"ID":"e8b185e5-e51b-4945-baca-221a382c0714","Type":"ContainerStarted","Data":"fc453f501ace9ee4987e841275a03087c19fda902d04dd6700f2effb16b82b74"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.769847 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" event={"ID":"db29ce09-9dfc-44aa-9eec-3a431d33e0e6","Type":"ContainerStarted","Data":"e8d05e7fe8666021a4440f3c1cafc379d41b178cb4b6d9236d9cdd3ea37b33fa"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.779220 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" event={"ID":"a477726d-aae1-47d9-8a3a-70316f991c29","Type":"ContainerStarted","Data":"b73e95a8840dfa664cded91b6bc9ea519f77e16c5a7fb70e168129f444a9c5d3"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.804336 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-flzss" event={"ID":"c3d88dfe-fa31-4759-baa6-6c847eb53020","Type":"ContainerStarted","Data":"595d660766e23c7cdc7e1621bad54afe1b26fa2339fb258b4bfac5da60291244"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.806139 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" event={"ID":"00d38b26-84b0-4b2e-92d1-d6c0f63f729d","Type":"ContainerStarted","Data":"16d439ba6acd58adfaa17b5c1fb344343a1b15d2f1d71b350f4492e300abc062"} Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.830015 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.830440 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.841831 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.842647 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.342626459 +0000 UTC m=+148.882510775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.928351 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-l6kz7" podStartSLOduration=126.92833073 podStartE2EDuration="2m6.92833073s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:25.68697993 +0000 UTC m=+148.226864246" watchObservedRunningTime="2025-12-09 11:29:25.92833073 +0000 UTC m=+148.468215046" Dec 09 11:29:25 crc kubenswrapper[4849]: I1209 11:29:25.945129 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:25 crc kubenswrapper[4849]: E1209 11:29:25.945474 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.445461936 +0000 UTC m=+148.985346252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.099748 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:26 crc kubenswrapper[4849]: E1209 11:29:26.100010 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.599994617 +0000 UTC m=+149.139878933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.187534 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nshxd"] Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.206541 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:26 crc kubenswrapper[4849]: E1209 11:29:26.206907 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.706887807 +0000 UTC m=+149.246772193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.307137 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:26 crc kubenswrapper[4849]: E1209 11:29:26.307348 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.807329591 +0000 UTC m=+149.347213907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.307590 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:26 crc kubenswrapper[4849]: E1209 11:29:26.307831 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.807823584 +0000 UTC m=+149.347707900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.308359 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8wwr8"] Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.374483 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n726k"] Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.409499 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:26 crc kubenswrapper[4849]: E1209 11:29:26.409860 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:26.90984265 +0000 UTC m=+149.449726976 (durationBeforeRetry 500ms). 
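Note that the same PVC is simultaneously pending a MountVolume (for the new image-registry-697d97f7c8-lhhqf pod) and an UnmountVolume (for the terminated pod UID 8f668bae-…). That is the volume manager's reconciler at work: each pass it diffs the desired state of the world against the actual state and starts whichever operations close the gap, and here both directions stall on the unregistered driver. Schematically (illustrative types, not kubelet source):

    package main

    import "fmt"

    type volumeOp struct {
        volume string
        pod    string
        kind   string // "mount" or "unmount"
    }

    // reconcile compares the desired state (pods that should have the volume)
    // with the actual state (pods that currently have it) and emits the
    // operations needed to converge, the way MountVolume and UnmountVolume
    // are both started for pvc-657094db-… in the entries above.
    func reconcile(desired, actual map[string]bool, volume string) []volumeOp {
        var ops []volumeOp
        for pod := range desired {
            if !actual[pod] {
                ops = append(ops, volumeOp{volume, pod, "mount"})
            }
        }
        for pod := range actual {
            if !desired[pod] {
                ops = append(ops, volumeOp{volume, pod, "unmount"})
            }
        }
        return ops
    }

    func main() {
        desired := map[string]bool{"ca549b95-b862-43e6-8540-595d05555d3c": true} // new image-registry pod
        actual := map[string]bool{"8f668bae-612b-4b75-9490-919e737c6a3b": true}  // terminated pod, volume still attached

        for _, op := range reconcile(desired, actual, "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") {
            fmt.Printf("%s %s for pod %s\n", op.kind, op.volume, op.pod)
        }
    }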
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.510972 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:26 crc kubenswrapper[4849]: E1209 11:29:26.511337 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:27.011320891 +0000 UTC m=+149.551205207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.671961 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.672153 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.672176 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.672234 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.672259 4849 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:26 crc kubenswrapper[4849]: E1209 11:29:26.702729 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:27.20269941 +0000 UTC m=+149.742583726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.707242 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.709539 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.743145 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.755014 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.784008 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:26 crc kubenswrapper[4849]: E1209 11:29:26.784333 4849 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:27.284311156 +0000 UTC m=+149.824195482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.860533 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l6kz7" event={"ID":"1e6507b4-4ff1-4fc1-afee-9e6c2e909908","Type":"ContainerStarted","Data":"0fff653700a130e88adc6781b72a46075a34790483953906c68440a2d86f0e7d"} Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.868165 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.873283 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-flzss" event={"ID":"c3d88dfe-fa31-4759-baa6-6c847eb53020","Type":"ContainerStarted","Data":"aaa4a74c29245692714207f134868710d79934175eeda7cb6a5223d4b9fc0e5d"} Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.885490 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:26 crc kubenswrapper[4849]: E1209 11:29:26.885877 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:27.38585822 +0000 UTC m=+149.925742546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.888560 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" event={"ID":"00d38b26-84b0-4b2e-92d1-d6c0f63f729d","Type":"ContainerStarted","Data":"3c12abaa4883ccec0113f1e008fa96d121990f1dd2879dc6823c6b6aa722fde3"} Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.898808 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.904763 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" event={"ID":"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d","Type":"ContainerStarted","Data":"08d60ec0e829810d1ae9e5a3cb8e6d0a7c4620140480fb63e44c5c9b28dd29cf"} Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.907032 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.918596 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" event={"ID":"a4f1b9f2-8afc-4817-a97b-4788f99674fe","Type":"ContainerStarted","Data":"a3241c29c392e7188eaee6b56a9214edfb904a875494ddde144d49528183cf97"} Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.925285 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" event={"ID":"a9eab53b-c723-460f-b55e-88b441b25a76","Type":"ContainerStarted","Data":"0a54e99e31673992a27b55cc7e5574768131a155344919626842d50c5b44133a"} Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.927102 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xk4zj" podStartSLOduration=127.927091078 podStartE2EDuration="2m7.927091078s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:26.925008655 +0000 UTC m=+149.464892971" watchObservedRunningTime="2025-12-09 11:29:26.927091078 +0000 UTC m=+149.466975394" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.927742 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-flzss" podStartSLOduration=127.927738224 podStartE2EDuration="2m7.927738224s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:26.904665278 +0000 UTC m=+149.444549594" watchObservedRunningTime="2025-12-09 11:29:26.927738224 +0000 UTC m=+149.467622540" Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.947265 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl"] Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.954002 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" event={"ID":"13016872-91b5-446f-a10a-93e366928c47","Type":"ContainerStarted","Data":"420e4db30aeb205ba1d641288cdba2b684680ef08032f69784965826a4ad14e4"} Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.976677 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz"] Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.982103 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" 
event={"ID":"db29ce09-9dfc-44aa-9eec-3a431d33e0e6","Type":"ContainerStarted","Data":"00f3d32a0b1f1fc21e2f73278636c4df13e065329de2435194f9a8f126ab257b"} Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.987102 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n726k" event={"ID":"f1b338b8-701c-4c0e-87ba-f830190df7eb","Type":"ContainerStarted","Data":"549484734d97b94c0d81965c0453a7548f0236290e99cda07c091372c65ebe35"} Dec 09 11:29:26 crc kubenswrapper[4849]: I1209 11:29:26.989733 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:26.994745 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:27.494727428 +0000 UTC m=+150.034611844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.094562 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:27.097443 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:27.597398601 +0000 UTC m=+150.137282917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.220997 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:27.221600 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:27.72158722 +0000 UTC m=+150.261471536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.345716 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:27.346251 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:27.846235581 +0000 UTC m=+150.386119897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.456977 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:27.457428 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:27.95741518 +0000 UTC m=+150.497299496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.557910 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.558150 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:27.558342 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.058324606 +0000 UTC m=+150.598208922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.561840 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.561886 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.611648 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82"] Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.627892 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fvjw7"] Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.656144 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-665jx"] Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.659972 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:27.660322 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.160310131 +0000 UTC m=+150.700194457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.669706 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh"] Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.683070 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw"] Dec 09 11:29:27 crc kubenswrapper[4849]: W1209 11:29:27.683499 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e1464d3_a5a6_4fc8_a091_77f41d391939.slice/crio-4c6f49580c598d1be1c11196520091e3e8e372808d2a3a2953dcb24f2a24c978 WatchSource:0}: Error finding container 4c6f49580c598d1be1c11196520091e3e8e372808d2a3a2953dcb24f2a24c978: Status 404 returned error can't find the container with id 4c6f49580c598d1be1c11196520091e3e8e372808d2a3a2953dcb24f2a24c978 Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.699525 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qmrg5"] Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.763327 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:27.767470 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.267443956 +0000 UTC m=+150.807328272 (durationBeforeRetry 500ms). 
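The "SyncLoop UPDATE" for hostpath-provisioner/csi-hostpathplugin-qmrg5 above is the fix arriving: once that driver pod starts, it drops a registration socket under the kubelet's plugins_registry directory, the kubelet completes a gRPC GetInfo handshake with it, and kubevirt.io.hostpath-provisioner finally lands in the registry that the failing operations have been querying. A polling sketch of just the discovery half (the real kubelet uses an fsnotify watch, and the handshake is omitted):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "time"
    )

    // pollSockets scans the registration directory for UNIX sockets it has
    // not seen before; each new socket corresponds to a plugin offering to
    // register itself with the kubelet.
    func pollSockets(dir string, seen map[string]bool) []string {
        var found []string
        _ = filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return nil
            }
            if d.Type()&fs.ModeSocket != 0 && !seen[path] {
                seen[path] = true
                found = append(found, path)
            }
            return nil
        })
        return found
    }

    func main() {
        seen := map[string]bool{}
        for i := 0; i < 3; i++ {
            for _, s := range pollSockets("/var/lib/kubelet/plugins_registry", seen) {
                fmt.Println("new plugin socket:", s)
            }
            time.Sleep(time.Second)
        }
    }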
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.774456 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j"] Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.799886 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-h72pd"] Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.864995 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7"] Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.872688 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc"] Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.873459 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:27.873769 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.373756421 +0000 UTC m=+150.913640737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.875105 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8"]
Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.884584 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7"]
Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.892767 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9zsgd"]
Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.892814 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v"]
Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.923841 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xksm"]
Dec 09 11:29:27 crc kubenswrapper[4849]: W1209 11:29:27.963865 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e9eff9a_660a_450b_9c63_c473634e7d0a.slice/crio-e01ba1b12ac19424a22ccbc58fe973e143203d6eb767528b7229fb0235d94466 WatchSource:0}: Error finding container e01ba1b12ac19424a22ccbc58fe973e143203d6eb767528b7229fb0235d94466: Status 404 returned error can't find the container with id e01ba1b12ac19424a22ccbc58fe973e143203d6eb767528b7229fb0235d94466
Dec 09 11:29:27 crc kubenswrapper[4849]: I1209 11:29:27.974718 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:27 crc kubenswrapper[4849]: E1209 11:29:27.975001 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.474980425 +0000 UTC m=+151.014864741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.034932 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr"]
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.045770 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9"]
Dec 09 11:29:28 crc kubenswrapper[4849]: W1209 11:29:28.058817 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e7c4a38_1f7c_4cb1_b757_8250869e1597.slice/crio-3224b25131510493ca2b62f4a0e5aac70d113ee01c7b37e7a5fd88126840037d WatchSource:0}: Error finding container 3224b25131510493ca2b62f4a0e5aac70d113ee01c7b37e7a5fd88126840037d: Status 404 returned error can't find the container with id 3224b25131510493ca2b62f4a0e5aac70d113ee01c7b37e7a5fd88126840037d
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.059011 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2"]
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.067088 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-74c2r" event={"ID":"8948a613-56f3-4a89-adb7-2c4a2262f2ee","Type":"ContainerStarted","Data":"fdc5364bd4d8aeb5fdbcf7425c8f6aa3b3de22701951e4caabdd85eac8be7f9b"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.067427 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-74c2r"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.068025 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2"]
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.080179 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.080536 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.580508831 +0000 UTC m=+151.120393147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.100795 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.100839 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.105902 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-74c2r" podStartSLOduration=129.105888586 podStartE2EDuration="2m9.105888586s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:28.105609759 +0000 UTC m=+150.645494075" watchObservedRunningTime="2025-12-09 11:29:28.105888586 +0000 UTC m=+150.645772902"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.112379 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kj9rl" event={"ID":"1609f508-67f3-4209-b9f3-e2195456befe","Type":"ContainerStarted","Data":"8016902aacc9ce9366a3bb33e9029d29e3397b5b2a19001311217f10a32252c6"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.149061 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" event={"ID":"79459db6-26fd-4be1-a0f6-ac4217a8229c","Type":"ContainerStarted","Data":"45204e02906c09340d7f781dda396c4d69c86a085b8e14363b3c5bae89771cf4"}
Dec 09 11:29:28 crc kubenswrapper[4849]: W1209 11:29:28.149302 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f27dee9_7157_455c_84d5_24c51b874b53.slice/crio-ae2bd927d0e12456db384cfe210de5a47ab5f38ce27ceca89545ee222aa901ae WatchSource:0}: Error finding container ae2bd927d0e12456db384cfe210de5a47ab5f38ce27ceca89545ee222aa901ae: Status 404 returned error can't find the container with id ae2bd927d0e12456db384cfe210de5a47ab5f38ce27ceca89545ee222aa901ae
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.150820 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kj9rl" podStartSLOduration=8.150809889 podStartE2EDuration="8.150809889s" podCreationTimestamp="2025-12-09 11:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:28.149781473 +0000 UTC m=+150.689665789" watchObservedRunningTime="2025-12-09 11:29:28.150809889 +0000 UTC m=+150.690694205"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.181974 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" event={"ID":"2d553207-9e63-4091-955d-35b3a8625ddb","Type":"ContainerStarted","Data":"696e19e9a97b1bb5b953638177df5647c3a0e4d1c0ba3b5957742a798e340fd8"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.201297 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.201985 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.701948249 +0000 UTC m=+151.241832565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.202073 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.203790 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.703777697 +0000 UTC m=+151.243662013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.268821 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" event={"ID":"ecade532-431e-464e-af8f-bdb1fe23ec47","Type":"ContainerStarted","Data":"f1de1acc01af08dfc479f60ceb19e3e499aef0d696f9ef8203d9ddd2f16a9aed"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.320235 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.320665 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.820645409 +0000 UTC m=+151.360529725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.330524 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h72pd" event={"ID":"3cfa026c-5ae3-47cc-aee8-b06522339617","Type":"ContainerStarted","Data":"2347c8056e1ddbe6771aac0581d70cf10f1a6f6818b9faba1b6ff3ff87655707"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.346872 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" event={"ID":"a4f1b9f2-8afc-4817-a97b-4788f99674fe","Type":"ContainerStarted","Data":"14867587e28eda09d337797e1791dbf3ba2ecafd918c96cf7833a1275829f3f5"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.357891 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" event={"ID":"e7f6648a-8487-415a-bdd3-16a27fea4871","Type":"ContainerStarted","Data":"f9862984f7baddf9a4e9ea49646b700657e864a282b639f5d414b25b5d7f3bd1"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.392960 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nshxd" podStartSLOduration=129.392942699 podStartE2EDuration="2m9.392942699s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:28.392134218 +0000 UTC m=+150.932018554" watchObservedRunningTime="2025-12-09 11:29:28.392942699 +0000 UTC m=+150.932827015"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.465771 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.466088 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:28.966077039 +0000 UTC m=+151.505961355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.484969 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" event={"ID":"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd","Type":"ContainerStarted","Data":"a3dc073c88465d307825a63cef301817fd233b8e7fab490c8ad0fa63bb74226f"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.502314 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" event={"ID":"8e9eff9a-660a-450b-9c63-c473634e7d0a","Type":"ContainerStarted","Data":"e01ba1b12ac19424a22ccbc58fe973e143203d6eb767528b7229fb0235d94466"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.540593 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 11:29:28 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld
Dec 09 11:29:28 crc kubenswrapper[4849]: [+]process-running ok
Dec 09 11:29:28 crc kubenswrapper[4849]: healthz check failed
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.540654 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.561058 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" podStartSLOduration=129.560995794 podStartE2EDuration="2m9.560995794s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:28.558857699 +0000 UTC m=+151.098742015" watchObservedRunningTime="2025-12-09 11:29:28.560995794 +0000 UTC m=+151.100880110"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.568643 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.569749 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:29.069730696 +0000 UTC m=+151.609615012 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.614955 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.615276 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n726k" event={"ID":"f1b338b8-701c-4c0e-87ba-f830190df7eb","Type":"ContainerStarted","Data":"0c5e04548a3f65a2d9968e087d871d718289e67f6d4b6e56f2bb75dc638899b2"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.615299 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" event={"ID":"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d","Type":"ContainerStarted","Data":"6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.636214 4849 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8wwr8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.636272 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" podUID="29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.641904 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" event={"ID":"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4","Type":"ContainerStarted","Data":"eb3fbee0cf2f43ace0f17be9feed12cdcedd28772e99f27fcb79b3dd69811115"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.672247 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.673781 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:29.173763633 +0000 UTC m=+151.713647949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.777933 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" event={"ID":"13016872-91b5-446f-a10a-93e366928c47","Type":"ContainerStarted","Data":"a32169148488259652bda3f33c578667a512e75ab79b860a6acb6096c3671b8a"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.777997 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" event={"ID":"13016872-91b5-446f-a10a-93e366928c47","Type":"ContainerStarted","Data":"428a26ed9c3281a092e5c088fcd1d1a6f14c7f402606ffc04db93a4e28bc8cbb"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.779577 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.781106 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:29.281077213 +0000 UTC m=+151.820961609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.787770 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.840644 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" event={"ID":"a477726d-aae1-47d9-8a3a-70316f991c29","Type":"ContainerStarted","Data":"a09a6f806e00ce98cc1c048d86739ec3ee4224fd8604e1bfc5833ab35a228d5c"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.876621 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" event={"ID":"29dd11e8-40b1-485b-a1c7-4df44220d7b0","Type":"ContainerStarted","Data":"c4e50d4c2338f86a4ac703ff8bcc250faebfe1ada60d52514b61c7381e2115f8"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.877445 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.883932 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.885247 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:29.385233342 +0000 UTC m=+151.925117658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.887829 4849 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-ddv82 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.887871 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" podUID="29dd11e8-40b1-485b-a1c7-4df44220d7b0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.917248 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" event={"ID":"a9eab53b-c723-460f-b55e-88b441b25a76","Type":"ContainerStarted","Data":"494608faba32f00211d48834cd6bd0213c94b19854663c3655f5f296a24de11c"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.917301 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" event={"ID":"a9eab53b-c723-460f-b55e-88b441b25a76","Type":"ContainerStarted","Data":"f91d5a6840d41f15fb8671557813eb5de8b9c4842f9bee1c69e4eb237b668183"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.957443 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" event={"ID":"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f","Type":"ContainerStarted","Data":"041ac6dce88121a81972ed39e5d16b81d2d57d2c08930a7274ad0acc6399db35"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.957487 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" event={"ID":"b5dfa241-23dd-4bb1-b068-0d0cb0dc9b2f","Type":"ContainerStarted","Data":"d878152742ceb972f6e89cd743c8e0086639f5061aeb50e0abb4614d4e0c4f15"}
Dec 09 11:29:28 crc kubenswrapper[4849]: I1209 11:29:28.991176 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:28 crc kubenswrapper[4849]: E1209 11:29:28.992583 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:29.492566453 +0000 UTC m=+152.032450759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.000477 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" event={"ID":"4e1464d3-a5a6-4fc8-a091-77f41d391939","Type":"ContainerStarted","Data":"4c6f49580c598d1be1c11196520091e3e8e372808d2a3a2953dcb24f2a24c978"}
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.112673 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:29 crc kubenswrapper[4849]: E1209 11:29:29.114326 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:29.61431283 +0000 UTC m=+152.154197146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.126333 4849 generic.go:334] "Generic (PLEG): container finished" podID="db29ce09-9dfc-44aa-9eec-3a431d33e0e6" containerID="00f3d32a0b1f1fc21e2f73278636c4df13e065329de2435194f9a8f126ab257b" exitCode=0
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.126682 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" event={"ID":"db29ce09-9dfc-44aa-9eec-3a431d33e0e6","Type":"ContainerDied","Data":"00f3d32a0b1f1fc21e2f73278636c4df13e065329de2435194f9a8f126ab257b"}
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.216939 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:29 crc kubenswrapper[4849]: E1209 11:29:29.217354 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:29.717335842 +0000 UTC m=+152.257220148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.347936 4849 generic.go:334] "Generic (PLEG): container finished" podID="e8b185e5-e51b-4945-baca-221a382c0714" containerID="6c269dc0ef2f896d29a9b766415b2450749197c02a32aff91a6eb160a59380c4" exitCode=0
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.348026 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" event={"ID":"e8b185e5-e51b-4945-baca-221a382c0714","Type":"ContainerDied","Data":"6c269dc0ef2f896d29a9b766415b2450749197c02a32aff91a6eb160a59380c4"}
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.350172 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:29 crc kubenswrapper[4849]: E1209 11:29:29.351750 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:29.851736651 +0000 UTC m=+152.391620967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.410443 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" event={"ID":"a72c5f55-1631-4f1a-8eb8-0c01edbdea67","Type":"ContainerStarted","Data":"91fa04a16d52e59ca2ffa48251ff59220535cb13a54b1dc9b4ea2f52dc6dba24"}
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.424809 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" event={"ID":"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43","Type":"ContainerStarted","Data":"bdb66d3ad46d6e4fbac6cc4b247a837bb71e18a985ce0dbc394eeb6b6eee7a66"}
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.442559 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" event={"ID":"96bbdc9d-911c-4916-a775-6ad2f827f831","Type":"ContainerStarted","Data":"d5fba12240995c600251416c689ee6c6416894a2d68ea3a8f1cc7575b8869855"}
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.457001 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:29 crc kubenswrapper[4849]: E1209 11:29:29.458193 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:29.958164109 +0000 UTC m=+152.498048425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.541891 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 11:29:29 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld
Dec 09 11:29:29 crc kubenswrapper[4849]: [+]process-running ok
Dec 09 11:29:29 crc kubenswrapper[4849]: healthz check failed
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.541950 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.558956 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:29 crc kubenswrapper[4849]: E1209 11:29:29.559210 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.059195119 +0000 UTC m=+152.599079435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.593676 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-n726k" podStartSLOduration=9.593619364 podStartE2EDuration="9.593619364s" podCreationTimestamp="2025-12-09 11:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:29.590061794 +0000 UTC m=+152.129946110" watchObservedRunningTime="2025-12-09 11:29:29.593619364 +0000 UTC m=+152.133503710"
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.665107 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:29 crc kubenswrapper[4849]: E1209 11:29:29.668063 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.168041678 +0000 UTC m=+152.707925994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.770852 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:29 crc kubenswrapper[4849]: E1209 11:29:29.771187 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.271175421 +0000 UTC m=+152.811059737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.874526 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:29 crc kubenswrapper[4849]: E1209 11:29:29.877029 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.377010934 +0000 UTC m=+152.916895250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:29 crc kubenswrapper[4849]: I1209 11:29:29.978151 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:29 crc kubenswrapper[4849]: E1209 11:29:29.978796 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.478784063 +0000 UTC m=+153.018668369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.079434 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.080118 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.58009894 +0000 UTC m=+153.119983256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.180660 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.180960 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.680947856 +0000 UTC m=+153.220832172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.267992 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" podStartSLOduration=131.267972639 podStartE2EDuration="2m11.267972639s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:30.266709427 +0000 UTC m=+152.806593753" watchObservedRunningTime="2025-12-09 11:29:30.267972639 +0000 UTC m=+152.807856955"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.269821 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" podStartSLOduration=131.269809966 podStartE2EDuration="2m11.269809966s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:30.188774644 +0000 UTC m=+152.728658960" watchObservedRunningTime="2025-12-09 11:29:30.269809966 +0000 UTC m=+152.809694282"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.303931 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.304111 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.804084828 +0000 UTC m=+153.343969144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.304231 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.304656 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.804641992 +0000 UTC m=+153.344526308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.344802 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" podStartSLOduration=130.344784294 podStartE2EDuration="2m10.344784294s" podCreationTimestamp="2025-12-09 11:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:30.344104466 +0000 UTC m=+152.883988802" watchObservedRunningTime="2025-12-09 11:29:30.344784294 +0000 UTC m=+152.884668620"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.406096 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.406510 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:30.906490983 +0000 UTC m=+153.446375299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.441133 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pwnk8" podStartSLOduration=131.441117205 podStartE2EDuration="2m11.441117205s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:30.372330895 +0000 UTC m=+152.912215221" watchObservedRunningTime="2025-12-09 11:29:30.441117205 +0000 UTC m=+152.981001521"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.445235 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gr4ld" podStartSLOduration=132.445222239 podStartE2EDuration="2m12.445222239s" podCreationTimestamp="2025-12-09 11:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:30.443467134 +0000 UTC m=+152.983351460" watchObservedRunningTime="2025-12-09 11:29:30.445222239 +0000 UTC m=+152.985106545"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.512639 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.513016 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:31.013004283 +0000 UTC m=+153.552888599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.534612 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h72pd" event={"ID":"3cfa026c-5ae3-47cc-aee8-b06522339617","Type":"ContainerStarted","Data":"f4935417eb8f82828925343cc3ec1052f7ffa1b817dd8f0c9198482dbe0d7e96"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.541852 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 11:29:30 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]process-running ok
Dec 09 11:29:30 crc kubenswrapper[4849]: healthz check failed
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.541901 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.621962 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.622355 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:31.122336924 +0000 UTC m=+153.662221240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.636115 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" event={"ID":"ecade532-431e-464e-af8f-bdb1fe23ec47","Type":"ContainerStarted","Data":"a1bd9ce48b9e9c68e0b97a10bd7335fafcbb88b3aff879e7e8a8df5977fd64fd"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.673726 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" event={"ID":"43fc7f6a-abb1-476f-bfb3-15ce82c13f41","Type":"ContainerStarted","Data":"cdf2f8725932c95ab84f27feed7ccd69f3dde0a2e8bcfb239e3554ad9775b039"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.673787 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" event={"ID":"43fc7f6a-abb1-476f-bfb3-15ce82c13f41","Type":"ContainerStarted","Data":"265851a955840eb5b24680a30be672ebf2dd0f3677dd84eb01541320caf7eb95"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.673802 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" event={"ID":"43fc7f6a-abb1-476f-bfb3-15ce82c13f41","Type":"ContainerStarted","Data":"260f04a353beb9b451c756cf06b44aee63992c545ecbc0a67e33e36c81cd0ad2"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.686054 4849 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jlw2t container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]log ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]etcd ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/max-in-flight-filter ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 09 11:29:30 crc kubenswrapper[4849]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/openshift.io-startinformers ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 09 11:29:30 crc kubenswrapper[4849]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 09 11:29:30 crc kubenswrapper[4849]: livez check failed
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.686145 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" podUID="adfc03a3-e122-4ebf-b69c-6fdc39087856" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.686289 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" event={"ID":"4e1464d3-a5a6-4fc8-a091-77f41d391939","Type":"ContainerStarted","Data":"1b9e8b340a8e449a95e46c0060ea88ac27c02b1607b0da248d35a28cef4c20ea"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.695706 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" event={"ID":"18b4e4e9-c586-4aad-a2d0-220cc1bc9f43","Type":"ContainerStarted","Data":"f2a7608293996f5477152f06bf6ff3a2bbe882c29417599cb47b6d92cee2348b"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.701951 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4d9tw" podStartSLOduration=131.701919019 podStartE2EDuration="2m11.701919019s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:30.700545144 +0000 UTC m=+153.240429460" watchObservedRunningTime="2025-12-09 11:29:30.701919019 +0000 UTC m=+153.241803355"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.702087 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" podStartSLOduration=130.702083343 podStartE2EDuration="2m10.702083343s" podCreationTimestamp="2025-12-09 11:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:30.482462096 +0000 UTC m=+153.022346432" watchObservedRunningTime="2025-12-09 11:29:30.702083343 +0000 UTC m=+153.241967659"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.728274 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e71429a8446844dde3799a40cf16bfb464f71da7ffaffef4ee81ef50c3341c99"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.728663 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"02430d00f393001c33e8104dd868163a47f104c0414cd137f10539744e5075cb"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.729945 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.732784 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:31.232767484 +0000 UTC m=+153.772651860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.769813 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.774613 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" event={"ID":"1f27dee9-7157-455c-84d5-24c51b874b53","Type":"ContainerStarted","Data":"ae2bd927d0e12456db384cfe210de5a47ab5f38ce27ceca89545ee222aa901ae"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.776003 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" event={"ID":"e7f6648a-8487-415a-bdd3-16a27fea4871","Type":"ContainerStarted","Data":"cfb7611dc31538f21488e14b7871e227ce5c4ab43b156184d41c2996d395d7f8"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.776030 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" event={"ID":"e7f6648a-8487-415a-bdd3-16a27fea4871","Type":"ContainerStarted","Data":"42c0b5d1f8f17c6ed32cb5496b18c11b46e1a2fae7005f55972c709ab5086da6"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.777642 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dxvqz" event={"ID":"cadfb13c-1ae0-4e22-b3df-fc477f51d4dd","Type":"ContainerStarted","Data":"ee28977f385d619af39212f2434cf1ea0a1dff83ba0c222765efd88ed60106c2"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.793025 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" event={"ID":"8e9eff9a-660a-450b-9c63-c473634e7d0a","Type":"ContainerStarted","Data":"2546e724fda2a396640e78ad94cd0ea55a32a8b524b627eaf64db6dc13ca49cb"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.834313 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" event={"ID":"2314f111-b042-40c3-832c-1c0d49c5e088","Type":"ContainerStarted","Data":"d49f0a9020e47d0d152d6d3d978d9a00fd50a778c86e1eaafa8434e16efdaac2"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.834369 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" event={"ID":"2314f111-b042-40c3-832c-1c0d49c5e088","Type":"ContainerStarted","Data":"2227bdd29aae844bc4cc6e028e840139634cb7d231574e0d5766fe9549e0a29f"}
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.835433 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-9zsgd"
Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.835995 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.837866 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:31.337831846 +0000 UTC m=+153.877716162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.857999 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" event={"ID":"4a0fccbe-ade0-4666-8758-d67f3c74e8e7","Type":"ContainerStarted","Data":"3ae172dda7a703ac052d2220e71351c24683e4fd522d4c1f6be54d7e0cc4a6e5"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.858048 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" event={"ID":"4a0fccbe-ade0-4666-8758-d67f3c74e8e7","Type":"ContainerStarted","Data":"a5cd10631fe47d7b305e691f257926f8bfca1b243c778d34ab8f954144019f19"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.859640 4849 patch_prober.go:28] interesting pod/console-operator-58897d9998-9zsgd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.859704 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" podUID="2314f111-b042-40c3-832c-1c0d49c5e088" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.894963 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" event={"ID":"4ef19983-4775-4438-83a8-f8279c96959c","Type":"ContainerStarted","Data":"92facd81e9073456f4f4be62a48f83c0d8a72801df3b9d6f3e46513a2bb57a36"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.895129 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" event={"ID":"4ef19983-4775-4438-83a8-f8279c96959c","Type":"ContainerStarted","Data":"25ff691e7525eace94758341d460c3fd30ad99ce492d8cb537d52283ffd6b676"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.896380 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.897473 4849 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" event={"ID":"29dd11e8-40b1-485b-a1c7-4df44220d7b0","Type":"ContainerStarted","Data":"235ed8eacaa78e359b0845fd46c586866e44ae92805e7c0825081a4230b8cb7b"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.899671 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" event={"ID":"79459db6-26fd-4be1-a0f6-ac4217a8229c","Type":"ContainerStarted","Data":"3cca3ec601601523907bb75ccf9ad29197ba59b74231e7bfc8bbe456df15b6b8"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.918270 4849 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-szrq9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.918321 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" podUID="4ef19983-4775-4438-83a8-f8279c96959c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.920178 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" event={"ID":"a72c5f55-1631-4f1a-8eb8-0c01edbdea67","Type":"ContainerStarted","Data":"56b627a017d1f0f452ac6b31b514044cc5805748f89cee54b4376cdcd1bab9d9"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.921506 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" event={"ID":"2d553207-9e63-4091-955d-35b3a8625ddb","Type":"ContainerStarted","Data":"2e14bcae8ff66bc6f7d709a0bbd53fc9cb74c6692e9b423fc70e8a36f531594b"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.938753 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:30 crc kubenswrapper[4849]: E1209 11:29:30.941733 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:31.441721929 +0000 UTC m=+153.981606245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.944172 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ddv82" Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.948698 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2" event={"ID":"c61bb947-7202-4145-99f8-5060296a1dc9","Type":"ContainerStarted","Data":"df23d458d606dd911d41e2940a8eb5da086b68cea34c4eb6dea64a9863fe1c75"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.948986 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2" event={"ID":"c61bb947-7202-4145-99f8-5060296a1dc9","Type":"ContainerStarted","Data":"0ba2560c6ca968390ae6614ab74e0de4feb7b063a4b2c23a97ef1645000ad4ea"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.953236 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4a9a8ff350baf50d82e19ace17aec506a17b997abf4f2654be44dfb27981781d"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.953268 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fc61ae07218edbe6d3366b5b955765c8c84d212646f83e8a960a657db7ab6366"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.953882 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.961894 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f8e7cb270fad94bef9a497c7104708eaf7dfa79c279a3f8a0b3fcc54daff8df5"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.961949 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"06a8534a9a18f1ea704d2b1f628612354a4f82c6ab4f1f850c6eb06471b29123"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.972637 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" event={"ID":"3e7c4a38-1f7c-4cb1-b757-8250869e1597","Type":"ContainerStarted","Data":"5846adfb8ea863f5f0091497887ec3d3c7e646604df8e22c60d5ef8cee398f05"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.972684 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" 
event={"ID":"3e7c4a38-1f7c-4cb1-b757-8250869e1597","Type":"ContainerStarted","Data":"3224b25131510493ca2b62f4a0e5aac70d113ee01c7b37e7a5fd88126840037d"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.973518 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.974292 4849 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xksm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.974342 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.975775 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-njgnq" event={"ID":"a477726d-aae1-47d9-8a3a-70316f991c29","Type":"ContainerStarted","Data":"c9ea92ddbb6291aedcfd6f382a8c36e5492e319fe95f05349353578e6a4529e5"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.977302 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" event={"ID":"2ab0b2db-189a-44e6-a904-49c45fca1a3e","Type":"ContainerStarted","Data":"25293dc542c1ce60400fe13047ce417283ce758fa97b7e38d974ae94e8274a81"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.977330 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" event={"ID":"2ab0b2db-189a-44e6-a904-49c45fca1a3e","Type":"ContainerStarted","Data":"ade5112b71b4c97cab9321cbf4a58a0ac83cc3f1a0ef91273f39a7d6b6ba88fe"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.991108 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" event={"ID":"dfbc3e4e-babb-4119-9343-68c87540802e","Type":"ContainerStarted","Data":"ba22101df94ab2c9f37b1a0535d1d55dc94e053d387c379762c05db353716676"} Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.993634 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:29:30 crc kubenswrapper[4849]: I1209 11:29:30.993677 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.045435 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.047292 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:31.547274525 +0000 UTC m=+154.087158841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.061292 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.139066 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z4fst" podStartSLOduration=132.139045349 podStartE2EDuration="2m12.139045349s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.138616439 +0000 UTC m=+153.678500755" watchObservedRunningTime="2025-12-09 11:29:31.139045349 +0000 UTC m=+153.678929665" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.139374 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fvjw7" podStartSLOduration=131.139367887 podStartE2EDuration="2m11.139367887s" podCreationTimestamp="2025-12-09 11:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.066322649 +0000 UTC m=+153.606206975" watchObservedRunningTime="2025-12-09 11:29:31.139367887 +0000 UTC m=+153.679252203" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.147491 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.147911 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:31.647895965 +0000 UTC m=+154.187780281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.249234 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.249610 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:31.749594602 +0000 UTC m=+154.289478918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.455349 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.455729 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:31.955715025 +0000 UTC m=+154.495599341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.457952 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5vdt8" podStartSLOduration=131.457935792 podStartE2EDuration="2m11.457935792s" podCreationTimestamp="2025-12-09 11:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.45589805 +0000 UTC m=+153.995782366" watchObservedRunningTime="2025-12-09 11:29:31.457935792 +0000 UTC m=+153.997820108" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.479954 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xtskh" podStartSLOduration=132.479925482 podStartE2EDuration="2m12.479925482s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.478010602 +0000 UTC m=+154.017894948" watchObservedRunningTime="2025-12-09 11:29:31.479925482 +0000 UTC m=+154.019809798" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.534607 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8pb5v" podStartSLOduration=132.534592492 podStartE2EDuration="2m12.534592492s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.532203941 +0000 UTC m=+154.072088257" watchObservedRunningTime="2025-12-09 11:29:31.534592492 +0000 UTC m=+154.074476808" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.561161 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.561296 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.061276901 +0000 UTC m=+154.601161207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.561381 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.561780 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.061766943 +0000 UTC m=+154.601651259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.580573 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 11:29:31 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld Dec 09 11:29:31 crc kubenswrapper[4849]: [+]process-running ok Dec 09 11:29:31 crc kubenswrapper[4849]: healthz check failed Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.580623 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.641819 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" podStartSLOduration=132.641801649 podStartE2EDuration="2m12.641801649s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.592532336 +0000 UTC m=+154.132416652" watchObservedRunningTime="2025-12-09 11:29:31.641801649 +0000 UTC m=+154.181685965" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.661979 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.662160 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.162135267 +0000 UTC m=+154.702019583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.679159 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" podStartSLOduration=132.679141759 podStartE2EDuration="2m12.679141759s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.645564855 +0000 UTC m=+154.185449171" watchObservedRunningTime="2025-12-09 11:29:31.679141759 +0000 UTC m=+154.219026075" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.712383 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dvfn7" podStartSLOduration=132.712362564 podStartE2EDuration="2m12.712362564s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.709921212 +0000 UTC m=+154.249805528" watchObservedRunningTime="2025-12-09 11:29:31.712362564 +0000 UTC m=+154.252246880" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.764816 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.765175 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.265160227 +0000 UTC m=+154.805044543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.837249 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" podStartSLOduration=132.837231671 podStartE2EDuration="2m12.837231671s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.83679356 +0000 UTC m=+154.376677886" watchObservedRunningTime="2025-12-09 11:29:31.837231671 +0000 UTC m=+154.377115987" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.837671 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" podStartSLOduration=132.837665232 podStartE2EDuration="2m12.837665232s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:31.793868808 +0000 UTC m=+154.333753124" watchObservedRunningTime="2025-12-09 11:29:31.837665232 +0000 UTC m=+154.377549548" Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.865845 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.866196 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.366177128 +0000 UTC m=+154.906061444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:31 crc kubenswrapper[4849]: I1209 11:29:31.969686 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:31 crc kubenswrapper[4849]: E1209 11:29:31.970009 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.469996279 +0000 UTC m=+155.009880595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.031129 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" event={"ID":"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4","Type":"ContainerStarted","Data":"0a443c1a5d1cdcb47d27b730c20a988f5881bc16ddc7484d5a68392fd128c566"} Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.062909 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" event={"ID":"2d553207-9e63-4091-955d-35b3a8625ddb","Type":"ContainerStarted","Data":"2d23de1f18a870a8449ea95b32c18966ebc7e2a70bdd1cec6bf7f71baf24f91e"} Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.075053 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.076569 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.57655135 +0000 UTC m=+155.116435656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.088711 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h72pd" event={"ID":"3cfa026c-5ae3-47cc-aee8-b06522339617","Type":"ContainerStarted","Data":"82e4ceb8af80abf712b0b8fa38cd573699a6c1c8a9efef1fdee082a1bff6ebd9"} Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.089315 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.127726 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" event={"ID":"1f27dee9-7157-455c-84d5-24c51b874b53","Type":"ContainerStarted","Data":"c98a10178988f6613e27e2b0838e8f251d42d04dd343a96183c682832dbf07ca"} Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.128610 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.137135 4849 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-85jmr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.137200 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" podUID="1f27dee9-7157-455c-84d5-24c51b874b53" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.139759 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2" event={"ID":"c61bb947-7202-4145-99f8-5060296a1dc9","Type":"ContainerStarted","Data":"515215aec207abf05e5cf2bb257de86e78c2bdd5a3ac2dfdfc7a07b90c357500"} Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.141141 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" podStartSLOduration=133.141117842 podStartE2EDuration="2m13.141117842s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.140998539 +0000 UTC m=+154.680882855" watchObservedRunningTime="2025-12-09 11:29:32.141117842 +0000 UTC m=+154.681002168" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.157515 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" event={"ID":"e8b185e5-e51b-4945-baca-221a382c0714","Type":"ContainerStarted","Data":"8856f699d7b1dae9f8aa8976c9456c0ddc540e33da08b325d6abda8ea31dcb9a"} Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 
11:29:32.166067 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8g6j2" event={"ID":"dfbc3e4e-babb-4119-9343-68c87540802e","Type":"ContainerStarted","Data":"f21756017516fd4c240cd84929bda9041351724cd3c11c6847dc3cf4b3f040a8"} Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.174926 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" event={"ID":"db29ce09-9dfc-44aa-9eec-3a431d33e0e6","Type":"ContainerStarted","Data":"377ffaec18416a561f9badea8e744a02019df95b5912a4d713b013357993c933"} Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.176714 4849 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-szrq9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.176757 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9" podUID="4ef19983-4775-4438-83a8-f8279c96959c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.178874 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.180569 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.180643 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.180741 4849 patch_prober.go:28] interesting pod/console-operator-58897d9998-9zsgd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.180763 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" podUID="2314f111-b042-40c3-832c-1c0d49c5e088" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.184173 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.684147537 +0000 UTC m=+155.224031923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.192635 4849 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xksm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.192696 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.198320 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zm9cl" podStartSLOduration=133.198302407 podStartE2EDuration="2m13.198302407s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.195350052 +0000 UTC m=+154.735234368" watchObservedRunningTime="2025-12-09 11:29:32.198302407 +0000 UTC m=+154.738186723" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.279920 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.280155 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.780123778 +0000 UTC m=+155.320008094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.280798 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.283583 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.783563656 +0000 UTC m=+155.323448062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.295582 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" podStartSLOduration=134.295559611 podStartE2EDuration="2m14.295559611s" podCreationTimestamp="2025-12-09 11:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.290765989 +0000 UTC m=+154.830650305" watchObservedRunningTime="2025-12-09 11:29:32.295559611 +0000 UTC m=+154.835443927" Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.381720 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.381878 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.881854176 +0000 UTC m=+155.421738492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.381966 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.382277 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.882263946 +0000 UTC m=+155.422148262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.470141 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-grhwc" podStartSLOduration=134.470110781 podStartE2EDuration="2m14.470110781s" podCreationTimestamp="2025-12-09 11:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.379342372 +0000 UTC m=+154.919226688" watchObservedRunningTime="2025-12-09 11:29:32.470110781 +0000 UTC m=+155.009995097"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.471000 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lchf7" podStartSLOduration=133.470992773 podStartE2EDuration="2m13.470992773s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.464886098 +0000 UTC m=+155.004770414" watchObservedRunningTime="2025-12-09 11:29:32.470992773 +0000 UTC m=+155.010877099"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.483244 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.483511 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.983474231 +0000 UTC m=+155.523358547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.483597 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.484017 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:32.984006485 +0000 UTC m=+155.523890841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.510613 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vlzz2" podStartSLOduration=133.510594571 podStartE2EDuration="2m13.510594571s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.510498139 +0000 UTC m=+155.050382455" watchObservedRunningTime="2025-12-09 11:29:32.510594571 +0000 UTC m=+155.050478877"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.538468 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 11:29:32 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld
Dec 09 11:29:32 crc kubenswrapper[4849]: [+]process-running ok
Dec 09 11:29:32 crc kubenswrapper[4849]: healthz check failed
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.538535 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.585004 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.585159 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.085128947 +0000 UTC m=+155.625013263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.585547 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.585914 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.085905297 +0000 UTC m=+155.625789683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.607583 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" podStartSLOduration=132.607557838 podStartE2EDuration="2m12.607557838s" podCreationTimestamp="2025-12-09 11:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.588589455 +0000 UTC m=+155.128473781" watchObservedRunningTime="2025-12-09 11:29:32.607557838 +0000 UTC m=+155.147442164"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.678544 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-h72pd" podStartSLOduration=12.678519993 podStartE2EDuration="12.678519993s" podCreationTimestamp="2025-12-09 11:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.640751973 +0000 UTC m=+155.180636289" watchObservedRunningTime="2025-12-09 11:29:32.678519993 +0000 UTC m=+155.218404319"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.678977 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-665jx" podStartSLOduration=133.678971435 podStartE2EDuration="2m13.678971435s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.676464941 +0000 UTC m=+155.216349267" watchObservedRunningTime="2025-12-09 11:29:32.678971435 +0000 UTC m=+155.218855771"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.687091 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.687306 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.187273496 +0000 UTC m=+155.727157822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.687392 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.687772 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.187756398 +0000 UTC m=+155.727640774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.744621 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-l6kz7"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.744673 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-l6kz7"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.745940 4849 patch_prober.go:28] interesting pod/console-f9d7485db-l6kz7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.746029 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-l6kz7" podUID="1e6507b4-4ff1-4fc1-afee-9e6c2e909908" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.770524 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm" podStartSLOduration=132.770479452 podStartE2EDuration="2m12.770479452s" podCreationTimestamp="2025-12-09 11:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:32.770123493 +0000 UTC m=+155.310007819" watchObservedRunningTime="2025-12-09 11:29:32.770479452 +0000 UTC m=+155.310363768"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.788242 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.788430 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.288389448 +0000 UTC m=+155.828273764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.789216 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.789526 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.289512827 +0000 UTC m=+155.829397213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.847374 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.847433 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.891530 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.891909 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.391890861 +0000 UTC m=+155.931775177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.952378 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.952450 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.952730 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.952765 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Dec 09 11:29:32 crc kubenswrapper[4849]: I1209 11:29:32.992816 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:32 crc kubenswrapper[4849]: E1209 11:29:32.993089 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.493077485 +0000 UTC m=+156.032961801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.113567 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.113740 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.613716015 +0000 UTC m=+156.153600331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.114226 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.114605 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.614595037 +0000 UTC m=+156.154479393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.187840 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" event={"ID":"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4","Type":"ContainerStarted","Data":"4cec01f06f51444c2e6c4a8c69bd1a4b3a433f82bbdfbe7071bb58fb31b46bf2"}
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.188574 4849 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xksm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.188629 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.188928 4849 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-85jmr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.188964 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" podUID="1f27dee9-7157-455c-84d5-24c51b874b53" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.195847 4849 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjfc7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.195906 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" podUID="db29ce09-9dfc-44aa-9eec-3a431d33e0e6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.215756 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.219024 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.719000313 +0000 UTC m=+156.258884629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
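
Every one of the repeating TearDown/MountDevice failures reduces to the same lookup: before issuing CSI calls, kubelet resolves the driver name against the set of CSI plugins that have registered on the node, and kubevirt.io.hostpath-provisioner is not there yet; the csi-hostpathplugin-qmrg5 containers are only now coming up (the PLEG ContainerStarted events above). A simplified, hypothetical sketch of such a registry lookup, not the actual kubelet code:

```go
package main

import (
	"fmt"
	"sync"
)

// csiDriverRegistry mimics, in spirit, kubelet's in-memory view of CSI
// drivers that have completed plugin registration on this node.
type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint (e.g. a unix socket path)
}

func (r *csiDriverRegistry) Register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

// Client fails the same way the log does until the driver has registered.
func (r *csiDriverRegistry) Client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &csiDriverRegistry{drivers: map[string]string{}}

	// Before the csi-hostpathplugin pod registers: same error text as the log.
	if _, err := reg.Client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("before registration:", err)
	}

	// Once the driver pod is up and registered, the retried operations succeed.
	// The socket path below is illustrative only.
	reg.Register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	ep, _ := reg.Client("kubevirt.io.hostpath-provisioner")
	fmt.Println("after registration:", ep)
}
```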
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.312611 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szrq9"
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.318598 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.318979 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.818966716 +0000 UTC m=+156.358851032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.419329 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.419646 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:33.919632417 +0000 UTC m=+156.459516733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.486658 4849 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xksm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.486704 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.486898 4849 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xksm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.486915 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.520618 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.520978 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.020962794 +0000 UTC m=+156.560847110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.529576 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-flzss"
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.535083 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 11:29:33 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld
Dec 09 11:29:33 crc kubenswrapper[4849]: [+]process-running ok
Dec 09 11:29:33 crc kubenswrapper[4849]: healthz check failed
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.535146 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.621742 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.621847 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.12182701 +0000 UTC m=+156.661711326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.622032 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.623107 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.123095902 +0000 UTC m=+156.662980268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
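
The router's startup probe body above uses the aggregated-healthz convention: one "[+]name ok" or "[-]name failed: reason withheld" line per check (reasons are withheld from unauthenticated callers), and any failing check turns the endpoint into HTTP 500, which is what the prober then reports. A minimal handler in the same spirit, a sketch rather than the router's actual implementation:

```go
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

// healthz renders checks in the "[+]name ok" / "[-]name failed" style seen in
// the probe output and returns HTTP 500 if any check fails.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	// Check names mirror the router's probe body; the pass/fail results here
	// are hard-coded purely for illustration.
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}
	http.HandleFunc("/healthz", healthz(checks))
	_ = http.ListenAndServe(":8080", nil)
}
```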
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.723188 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.723566 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.223550388 +0000 UTC m=+156.763434704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.826150 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.826480 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.326467497 +0000 UTC m=+156.866351813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.926626 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.926988 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.426962193 +0000 UTC m=+156.966846519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.927169 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:33 crc kubenswrapper[4849]: E1209 11:29:33.927527 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.427510077 +0000 UTC m=+156.967394393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:33 crc kubenswrapper[4849]: I1209 11:29:33.978863 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm"
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.028403 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.028556 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.528524167 +0000 UTC m=+157.068408483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.028779 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.029097 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.529086981 +0000 UTC m=+157.068971367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.130223 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.630200364 +0000 UTC m=+157.170084690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
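
Each failed volume operation is parked with a "No retries permitted until" deadline, here consistently the failure time plus the printed durationBeforeRetry of 500ms; the m=+157.068... suffix is the monotonic clock reading that Go's time.Time carries alongside wall time. A small sketch of that retry gate under these assumptions (the exact kubelet backoff policy is not reproduced here):

```go
package main

import (
	"fmt"
	"time"
)

// pendingOp mimics the per-volume retry gate: a failed operation may not be
// retried before retryAfter.
type pendingOp struct {
	lastErr    error
	retryAfter time.Time
}

func (p *pendingOp) fail(err error, backoff time.Duration) {
	p.lastErr = err
	p.retryAfter = time.Now().Add(backoff) // "No retries permitted until ..."
}

func (p *pendingOp) mayRetry() bool {
	return time.Now().After(p.retryAfter)
}

func main() {
	op := &pendingOp{}
	op.fail(fmt.Errorf("driver not registered"), 500*time.Millisecond)

	// time.Time's String() prints the monotonic reading as "m=+seconds",
	// which is where the log's "m=+157.068408483" style suffix comes from.
	fmt.Println("no retries permitted until", op.retryAfter)
	fmt.Println("may retry now?", op.mayRetry())

	time.Sleep(600 * time.Millisecond)
	fmt.Println("may retry after backoff?", op.mayRetry())
}
```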
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.130262 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.130469 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.130811 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.630800189 +0000 UTC m=+157.170684505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.191855 4849 patch_prober.go:28] interesting pod/console-operator-58897d9998-9zsgd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.191919 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" podUID="2314f111-b042-40c3-832c-1c0d49c5e088" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.205156 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" event={"ID":"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4","Type":"ContainerStarted","Data":"6d8a812c9832f936592616ab38d592ff808d853665eba9648dcf8e6aaca35f8d"}
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.213228 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k9zjm"
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.279468 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.280684 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.780641761 +0000 UTC m=+157.320526077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.281932 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.282195 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.78218793 +0000 UTC m=+157.322072236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.384308 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.384664 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.884645456 +0000 UTC m=+157.424529772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.486388 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf"
Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.486786 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:34.986771274 +0000 UTC m=+157.526655590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.533535 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 11:29:34 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld
Dec 09 11:29:34 crc kubenswrapper[4849]: [+]process-running ok
Dec 09 11:29:34 crc kubenswrapper[4849]: healthz check failed
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.533582 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.560258 4849 patch_prober.go:28] interesting pod/console-operator-58897d9998-9zsgd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.560326 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" podUID="2314f111-b042-40c3-832c-1c0d49c5e088" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.588108 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
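
Two distinct probe failure shapes alternate through this window: "connect: connection refused" (nothing is listening yet, so the probe fails immediately) and "net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" (the connection attempt is still pending when the probe's timeout fires). A self-contained sketch that can reproduce both, not kubelet's actual prober code; the addresses below are illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP GET in the spirit of an HTTP probe: any transport
// error or a non-2xx/3xx status counts as a failure.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "connect: connection refused" or
		// "Client.Timeout exceeded while awaiting headers"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Nothing listening on this local port: fails fast with "connection refused".
	fmt.Println(probe("http://127.0.0.1:59999/healthz", 1*time.Second))

	// A non-routable address typically hangs until the client timeout fires,
	// producing the "Client.Timeout exceeded while awaiting headers" error.
	fmt.Println(probe("http://10.255.255.1:8443/readyz", 1*time.Second))
}
```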
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.588490 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.088471582 +0000 UTC m=+157.628355898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.588680 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.589019 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.089008115 +0000 UTC m=+157.628892431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.689340 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.689975 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.189937873 +0000 UTC m=+157.729822179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.790936 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.791329 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.291314122 +0000 UTC m=+157.831198438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.857553 4849 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-85jmr container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.857630 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" podUID="1f27dee9-7157-455c-84d5-24c51b874b53" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.857788 4849 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-85jmr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.857854 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" podUID="1f27dee9-7157-455c-84d5-24c51b874b53" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.891489 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume 
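
The UniqueName repeated through all of these entries, kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, is kubelet's composite volume key: the volume plugin name, a ^ separator, then the CSI volume handle. That is why the same PVC shows up under both the terminating pod's unmount and the image-registry pod's mount. A small illustrative parser (a hypothetical helper, not a kubelet API):

```go
package main

import (
	"fmt"
	"strings"
)

// splitUniqueVolumeName breaks a CSI unique volume name of the form
// "kubernetes.io/csi/<driver>^<volumeHandle>" into driver and handle.
func splitUniqueVolumeName(unique string) (driver, handle string, err error) {
	const prefix = "kubernetes.io/csi/"
	if !strings.HasPrefix(unique, prefix) {
		return "", "", fmt.Errorf("not a CSI unique volume name: %q", unique)
	}
	parts := strings.SplitN(strings.TrimPrefix(unique, prefix), "^", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("missing ^ separator in %q", unique)
	}
	return parts[0], parts[1], nil
}

func main() {
	driver, handle, err := splitUniqueVolumeName(
		"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
	if err != nil {
		panic(err)
	}
	fmt.Println("driver:", driver) // kubevirt.io.hostpath-provisioner
	fmt.Println("handle:", handle) // pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8
}
```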
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.891664 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.391633074 +0000 UTC m=+157.931517390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.891837 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.892160 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.392142667 +0000 UTC m=+157.932026983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.949635 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-824s2"] Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.950751 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.961236 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.993906 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.994063 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.494032759 +0000 UTC m=+158.033917075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:34 crc kubenswrapper[4849]: I1209 11:29:34.994738 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:34 crc kubenswrapper[4849]: E1209 11:29:34.995084 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.495069085 +0000 UTC m=+158.034953411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.032816 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-824s2"] Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.096485 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.096688 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-utilities\") pod \"certified-operators-824s2\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") " pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.096741 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl42m\" (UniqueName: \"kubernetes.io/projected/08448cd5-1dba-4274-ab2b-16d4ac6c0746-kube-api-access-xl42m\") pod \"certified-operators-824s2\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") " pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.096799 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-catalog-content\") pod \"certified-operators-824s2\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") " pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.096941 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.596922777 +0000 UTC m=+158.136807093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.149898 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tlmnz"] Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.151071 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.154688 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.189994 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlmnz"] Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.192323 4849 patch_prober.go:28] interesting pod/console-operator-58897d9998-9zsgd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.192387 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" podUID="2314f111-b042-40c3-832c-1c0d49c5e088" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.198283 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl42m\" (UniqueName: \"kubernetes.io/projected/08448cd5-1dba-4274-ab2b-16d4ac6c0746-kube-api-access-xl42m\") pod \"certified-operators-824s2\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") " pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.198360 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-catalog-content\") pod \"certified-operators-824s2\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") " pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.198384 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.198468 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-utilities\") pod \"certified-operators-824s2\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") " pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.198919 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-utilities\") pod \"certified-operators-824s2\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") " pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.198989 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2025-12-09 11:29:35.698972923 +0000 UTC m=+158.238857309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.199316 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-catalog-content\") pod \"certified-operators-824s2\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") " pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.212921 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" event={"ID":"92f2bbfc-50e9-4d11-ac60-28efcfaea5b4","Type":"ContainerStarted","Data":"066cea8450c50230851d3c4f3bea96df684ce05d4a31176f312f0a2ae89a5b9b"} Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.259297 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl42m\" (UniqueName: \"kubernetes.io/projected/08448cd5-1dba-4274-ab2b-16d4ac6c0746-kube-api-access-xl42m\") pod \"certified-operators-824s2\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") " pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.267685 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-824s2" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.285242 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qmrg5" podStartSLOduration=15.285224087 podStartE2EDuration="15.285224087s" podCreationTimestamp="2025-12-09 11:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:35.284831187 +0000 UTC m=+157.824715523" watchObservedRunningTime="2025-12-09 11:29:35.285224087 +0000 UTC m=+157.825108403" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.305070 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.305338 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kktf\" (UniqueName: \"kubernetes.io/projected/2b3a8f6d-222d-4fee-a997-b30bb399b6be-kube-api-access-5kktf\") pod \"community-operators-tlmnz\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.307077 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.807051492 +0000 UTC m=+158.346935808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.305463 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-utilities\") pod \"community-operators-tlmnz\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.312640 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-catalog-content\") pod \"community-operators-tlmnz\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.343720 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-md5l9"] Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.352035 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.394426 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-md5l9"] Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.434170 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-utilities\") pod \"community-operators-tlmnz\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.434212 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-catalog-content\") pod \"community-operators-tlmnz\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.434249 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kktf\" (UniqueName: \"kubernetes.io/projected/2b3a8f6d-222d-4fee-a997-b30bb399b6be-kube-api-access-5kktf\") pod \"community-operators-tlmnz\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.434288 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.434588 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:35.934575536 +0000 UTC m=+158.474459852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.435040 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-utilities\") pod \"community-operators-tlmnz\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.435246 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-catalog-content\") pod \"community-operators-tlmnz\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.486529 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kktf\" (UniqueName: \"kubernetes.io/projected/2b3a8f6d-222d-4fee-a997-b30bb399b6be-kube-api-access-5kktf\") pod \"community-operators-tlmnz\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.537007 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.537216 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5p75\" (UniqueName: \"kubernetes.io/projected/f19e5981-0356-4c0d-842b-211cfbef65b3-kube-api-access-c5p75\") pod \"certified-operators-md5l9\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.537260 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-catalog-content\") pod \"certified-operators-md5l9\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.537305 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-utilities\") pod \"certified-operators-md5l9\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.537469 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 11:29:36.037452764 +0000 UTC m=+158.577337080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.546713 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 11:29:35 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld Dec 09 11:29:35 crc kubenswrapper[4849]: [+]process-running ok Dec 09 11:29:35 crc kubenswrapper[4849]: healthz check failed Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.546775 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.568816 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dhd9l"] Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.569792 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.612597 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhd9l"] Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.641136 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5p75\" (UniqueName: \"kubernetes.io/projected/f19e5981-0356-4c0d-842b-211cfbef65b3-kube-api-access-c5p75\") pod \"certified-operators-md5l9\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.641188 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-catalog-content\") pod \"certified-operators-md5l9\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.641229 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-utilities\") pod \"certified-operators-md5l9\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.641261 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.641576 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.141562692 +0000 UTC m=+158.681447008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.642349 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-catalog-content\") pod \"certified-operators-md5l9\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.642606 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-utilities\") pod \"certified-operators-md5l9\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.679659 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.696577 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jlw2t" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.750997 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5p75\" (UniqueName: \"kubernetes.io/projected/f19e5981-0356-4c0d-842b-211cfbef65b3-kube-api-access-c5p75\") pod \"certified-operators-md5l9\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.752938 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.753092 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.253071749 +0000 UTC m=+158.792956065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.753152 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.753202 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-utilities\") pod \"community-operators-dhd9l\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.753278 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-catalog-content\") pod \"community-operators-dhd9l\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.753364 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcp6\" (UniqueName: \"kubernetes.io/projected/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-kube-api-access-6mcp6\") pod \"community-operators-dhd9l\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.753749 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.253732746 +0000 UTC m=+158.793617062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.768006 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.822610 4849 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjfc7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.822684 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" podUID="db29ce09-9dfc-44aa-9eec-3a431d33e0e6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.823312 4849 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjfc7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.823398 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" podUID="db29ce09-9dfc-44aa-9eec-3a431d33e0e6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.861101 4849 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-85jmr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.861167 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" podUID="1f27dee9-7157-455c-84d5-24c51b874b53" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.861791 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.861938 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-catalog-content\") pod \"community-operators-dhd9l\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.861964 4849 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.361936258 +0000 UTC m=+158.901820574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.862138 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcp6\" (UniqueName: \"kubernetes.io/projected/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-kube-api-access-6mcp6\") pod \"community-operators-dhd9l\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.862273 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.862316 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-catalog-content\") pod \"community-operators-dhd9l\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.862332 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-utilities\") pod \"community-operators-dhd9l\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.863470 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.363461507 +0000 UTC m=+158.903345823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.863942 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-utilities\") pod \"community-operators-dhd9l\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.963727 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:35 crc kubenswrapper[4849]: E1209 11:29:35.964146 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.464127047 +0000 UTC m=+159.004011363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:35 crc kubenswrapper[4849]: I1209 11:29:35.976612 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-md5l9" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.022222 4849 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.028551 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcp6\" (UniqueName: \"kubernetes.io/projected/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-kube-api-access-6mcp6\") pod \"community-operators-dhd9l\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.073620 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.073994 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.573977693 +0000 UTC m=+159.113862009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.174748 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.174859 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.674841409 +0000 UTC m=+159.214725725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.175020 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.175307 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.67529872 +0000 UTC m=+159.215183036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.207291 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.276433 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.276783 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.776767752 +0000 UTC m=+159.316652078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.380306 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.383138 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.883125078 +0000 UTC m=+159.423009394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.481521 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.481961 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:36.981939981 +0000 UTC m=+159.521824297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.541651 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 11:29:36 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld Dec 09 11:29:36 crc kubenswrapper[4849]: [+]process-running ok Dec 09 11:29:36 crc kubenswrapper[4849]: healthz check failed Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.541716 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.557449 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-824s2"] Dec 09 11:29:36 crc kubenswrapper[4849]: W1209 11:29:36.569696 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08448cd5_1dba_4274_ab2b_16d4ac6c0746.slice/crio-a5e424626868777ba0edd51d48208c9dc8951d54a6bae0d172792ff442449f0a WatchSource:0}: Error finding container a5e424626868777ba0edd51d48208c9dc8951d54a6bae0d172792ff442449f0a: Status 404 returned error can't find the container with id a5e424626868777ba0edd51d48208c9dc8951d54a6bae0d172792ff442449f0a Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.584873 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.585888 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:37.085874605 +0000 UTC m=+159.625758921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.686380 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.686805 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:37.186784472 +0000 UTC m=+159.726668788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.787328 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.787678 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:37.287665809 +0000 UTC m=+159.827550125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.880227 4849 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-09T11:29:36.022247666Z","Handler":null,"Name":""} Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.888142 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.890778 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 11:29:37.390748881 +0000 UTC m=+159.930633207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.905265 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlmnz"] Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.952841 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lmwmm"] Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.962643 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.970539 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmwmm"] Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.982995 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.992397 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.992590 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-utilities\") pod \"redhat-marketplace-lmwmm\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.992622 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-catalog-content\") pod \"redhat-marketplace-lmwmm\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:36 crc kubenswrapper[4849]: I1209 11:29:36.992638 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2477v\" (UniqueName: \"kubernetes.io/projected/33858531-f998-4cee-b45d-9d5cd8b45f2e-kube-api-access-2477v\") pod \"redhat-marketplace-lmwmm\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:36 crc kubenswrapper[4849]: E1209 11:29:36.997922 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 11:29:37.497893297 +0000 UTC m=+160.037777613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhhqf" (UID: "ca549b95-b862-43e6-8540-595d05555d3c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.059573 4849 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.059886 4849 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.078272 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-md5l9"] Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.097551 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.097850 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2477v\" (UniqueName: \"kubernetes.io/projected/33858531-f998-4cee-b45d-9d5cd8b45f2e-kube-api-access-2477v\") pod \"redhat-marketplace-lmwmm\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.098063 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-utilities\") pod \"redhat-marketplace-lmwmm\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.098486 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-catalog-content\") pod \"redhat-marketplace-lmwmm\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.098906 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-utilities\") pod \"redhat-marketplace-lmwmm\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.099127 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-catalog-content\") pod \"redhat-marketplace-lmwmm\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.135562 4849 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhd9l"] Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.158564 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2477v\" (UniqueName: \"kubernetes.io/projected/33858531-f998-4cee-b45d-9d5cd8b45f2e-kube-api-access-2477v\") pod \"redhat-marketplace-lmwmm\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.243848 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.272508 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmnz" event={"ID":"2b3a8f6d-222d-4fee-a997-b30bb399b6be","Type":"ContainerStarted","Data":"3495251836b380e412e438e913527c5c60d8068561a05f34edd54e04ac36270f"} Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.273827 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhd9l" event={"ID":"2b6edbbd-c246-4696-ac45-efb4c27bbd1b","Type":"ContainerStarted","Data":"9f7ff2e5195d0f94021a176371509186c74d952553fcd03dd1b8f67953c72d5b"} Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.275971 4849 generic.go:334] "Generic (PLEG): container finished" podID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerID="9954e8b2edb647fe88b08b32076617563cc648410652fa25006d2a6e2462c0eb" exitCode=0 Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.276134 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-824s2" event={"ID":"08448cd5-1dba-4274-ab2b-16d4ac6c0746","Type":"ContainerDied","Data":"9954e8b2edb647fe88b08b32076617563cc648410652fa25006d2a6e2462c0eb"} Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.276223 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-824s2" event={"ID":"08448cd5-1dba-4274-ab2b-16d4ac6c0746","Type":"ContainerStarted","Data":"a5e424626868777ba0edd51d48208c9dc8951d54a6bae0d172792ff442449f0a"} Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.280552 4849 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.285187 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md5l9" event={"ID":"f19e5981-0356-4c0d-842b-211cfbef65b3","Type":"ContainerStarted","Data":"8d50e5c56d99f0a418596005c3dcfdf7d5b152958e977cd61801f0f0b50ce773"} Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.300769 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.341361 4849 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.341402 4849 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.357033 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-27bsk"] Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.358367 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.367547 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.402900 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27bsk"] Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.504400 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-catalog-content\") pod \"redhat-marketplace-27bsk\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.504889 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-utilities\") pod \"redhat-marketplace-27bsk\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.504922 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99j4\" (UniqueName: \"kubernetes.io/projected/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-kube-api-access-d99j4\") pod \"redhat-marketplace-27bsk\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.530140 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhhqf\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.536627 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 11:29:37 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld 
Dec 09 11:29:37 crc kubenswrapper[4849]: [+]process-running ok Dec 09 11:29:37 crc kubenswrapper[4849]: healthz check failed Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.536675 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.550691 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.605912 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-catalog-content\") pod \"redhat-marketplace-27bsk\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.606061 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-utilities\") pod \"redhat-marketplace-27bsk\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.606100 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99j4\" (UniqueName: \"kubernetes.io/projected/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-kube-api-access-d99j4\") pod \"redhat-marketplace-27bsk\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.606772 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-catalog-content\") pod \"redhat-marketplace-27bsk\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.607056 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-utilities\") pod \"redhat-marketplace-27bsk\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.642354 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99j4\" (UniqueName: \"kubernetes.io/projected/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-kube-api-access-d99j4\") pod \"redhat-marketplace-27bsk\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.671918 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:29:37 crc kubenswrapper[4849]: I1209 11:29:37.832679 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjfc7" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.158499 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmwmm"] Dec 09 11:29:38 crc kubenswrapper[4849]: W1209 11:29:38.175158 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33858531_f998_4cee_b45d_9d5cd8b45f2e.slice/crio-f519c9c4bffbdfedb0fab9e1ea29f3a0705756fae93f2a6d517e5df14ec33254 WatchSource:0}: Error finding container f519c9c4bffbdfedb0fab9e1ea29f3a0705756fae93f2a6d517e5df14ec33254: Status 404 returned error can't find the container with id f519c9c4bffbdfedb0fab9e1ea29f3a0705756fae93f2a6d517e5df14ec33254 Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.340547 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6zwl9"] Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.341544 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.346096 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.363180 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6zwl9"] Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.496242 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmwmm" event={"ID":"33858531-f998-4cee-b45d-9d5cd8b45f2e","Type":"ContainerStarted","Data":"f519c9c4bffbdfedb0fab9e1ea29f3a0705756fae93f2a6d517e5df14ec33254"} Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.502322 4849 generic.go:334] "Generic (PLEG): container finished" podID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerID="0ce1b2c9a4fff65333816008444b0147b6e9292b2444f4be733c865061e21f95" exitCode=0 Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.509813 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhd9l" event={"ID":"2b6edbbd-c246-4696-ac45-efb4c27bbd1b","Type":"ContainerDied","Data":"0ce1b2c9a4fff65333816008444b0147b6e9292b2444f4be733c865061e21f95"} Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.568672 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 11:29:38 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld Dec 09 11:29:38 crc kubenswrapper[4849]: [+]process-running ok Dec 09 11:29:38 crc kubenswrapper[4849]: healthz check failed Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.569103 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.592074 4849 generic.go:334] "Generic (PLEG): 
container finished" podID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerID="bae6a375484a3a989a8e55d5a3d287eed9b93f9e1aa314e9410e6ab270cb8a05" exitCode=0 Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.595051 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48chx\" (UniqueName: \"kubernetes.io/projected/a64f1a61-70ff-4d3d-b033-e65b05414446-kube-api-access-48chx\") pod \"redhat-operators-6zwl9\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.595085 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-utilities\") pod \"redhat-operators-6zwl9\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.595124 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-catalog-content\") pod \"redhat-operators-6zwl9\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.607891 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.608575 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md5l9" event={"ID":"f19e5981-0356-4c0d-842b-211cfbef65b3","Type":"ContainerStarted","Data":"7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9"} Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.608612 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmnz" event={"ID":"2b3a8f6d-222d-4fee-a997-b30bb399b6be","Type":"ContainerDied","Data":"bae6a375484a3a989a8e55d5a3d287eed9b93f9e1aa314e9410e6ab270cb8a05"} Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.700159 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48chx\" (UniqueName: \"kubernetes.io/projected/a64f1a61-70ff-4d3d-b033-e65b05414446-kube-api-access-48chx\") pod \"redhat-operators-6zwl9\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.700218 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-utilities\") pod \"redhat-operators-6zwl9\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.700257 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-catalog-content\") pod \"redhat-operators-6zwl9\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.700747 4849 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-catalog-content\") pod \"redhat-operators-6zwl9\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.701287 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-utilities\") pod \"redhat-operators-6zwl9\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.754720 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fqwzq"] Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.763078 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48chx\" (UniqueName: \"kubernetes.io/projected/a64f1a61-70ff-4d3d-b033-e65b05414446-kube-api-access-48chx\") pod \"redhat-operators-6zwl9\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.770696 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.803812 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fqwzq"] Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.853337 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.854222 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.882927 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.883271 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.888052 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.905504 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d6b1573-966a-4222-80b9-64e9753a2673-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1d6b1573-966a-4222-80b9-64e9753a2673\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.905552 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-utilities\") pod \"redhat-operators-fqwzq\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.905586 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9wr4\" (UniqueName: \"kubernetes.io/projected/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-kube-api-access-w9wr4\") pod \"redhat-operators-fqwzq\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.905627 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6b1573-966a-4222-80b9-64e9753a2673-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1d6b1573-966a-4222-80b9-64e9753a2673\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.905702 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-catalog-content\") pod \"redhat-operators-fqwzq\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.913920 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.979698 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.981299 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.984856 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 09 11:29:38 crc kubenswrapper[4849]: I1209 11:29:38.985552 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.007174 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9wr4\" (UniqueName: \"kubernetes.io/projected/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-kube-api-access-w9wr4\") pod \"redhat-operators-fqwzq\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.007237 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6b1573-966a-4222-80b9-64e9753a2673-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1d6b1573-966a-4222-80b9-64e9753a2673\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.007315 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-catalog-content\") pod \"redhat-operators-fqwzq\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.007347 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d6b1573-966a-4222-80b9-64e9753a2673-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1d6b1573-966a-4222-80b9-64e9753a2673\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.007377 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-utilities\") pod \"redhat-operators-fqwzq\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.012443 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6b1573-966a-4222-80b9-64e9753a2673-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1d6b1573-966a-4222-80b9-64e9753a2673\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.012935 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-catalog-content\") pod \"redhat-operators-fqwzq\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.020886 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-utilities\") pod \"redhat-operators-fqwzq\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " pod="openshift-marketplace/redhat-operators-fqwzq" Dec 
09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.054084 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d6b1573-966a-4222-80b9-64e9753a2673-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1d6b1573-966a-4222-80b9-64e9753a2673\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.055058 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9wr4\" (UniqueName: \"kubernetes.io/projected/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-kube-api-access-w9wr4\") pod \"redhat-operators-fqwzq\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.072086 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.100844 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.114349 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.114431 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.191788 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.218532 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.218733 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.218980 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.252293 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.356885 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.421806 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27bsk"] Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.535313 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 11:29:39 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld Dec 09 11:29:39 crc kubenswrapper[4849]: [+]process-running ok Dec 09 11:29:39 crc kubenswrapper[4849]: healthz check failed Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.535360 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.564780 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhhqf"] Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.646514 4849 generic.go:334] "Generic (PLEG): container finished" podID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerID="7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9" exitCode=0 Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.646729 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md5l9" 
event={"ID":"f19e5981-0356-4c0d-842b-211cfbef65b3","Type":"ContainerDied","Data":"7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9"} Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.674086 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27bsk" event={"ID":"91c773f3-2b45-488a-9b3c-5c0f2255f5cc","Type":"ContainerStarted","Data":"bc5901a0d0fc6df3fd95672031396ca22ab3c1555251aefb9934eb52eefc3a9a"} Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.675581 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6zwl9"] Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.687902 4849 generic.go:334] "Generic (PLEG): container finished" podID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerID="6d1b01738ae9a7dd4a5262b3be62dced38ee844dfd3586be22a38babe78c6232" exitCode=0 Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.690110 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmwmm" event={"ID":"33858531-f998-4cee-b45d-9d5cd8b45f2e","Type":"ContainerDied","Data":"6d1b01738ae9a7dd4a5262b3be62dced38ee844dfd3586be22a38babe78c6232"} Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.701304 4849 generic.go:334] "Generic (PLEG): container finished" podID="8e9eff9a-660a-450b-9c63-c473634e7d0a" containerID="2546e724fda2a396640e78ad94cd0ea55a32a8b524b627eaf64db6dc13ca49cb" exitCode=0 Dec 09 11:29:39 crc kubenswrapper[4849]: I1209 11:29:39.706889 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" event={"ID":"8e9eff9a-660a-450b-9c63-c473634e7d0a","Type":"ContainerDied","Data":"2546e724fda2a396640e78ad94cd0ea55a32a8b524b627eaf64db6dc13ca49cb"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.093615 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fqwzq"] Dec 09 11:29:40 crc kubenswrapper[4849]: W1209 11:29:40.125632 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ea602d1_ec9a_4a2b_8b4f_935d9ff4514a.slice/crio-9c9d9748cd168675a3eb4bd8bfaa84701ff3d395b2435c19fe8c482cc42c4315 WatchSource:0}: Error finding container 9c9d9748cd168675a3eb4bd8bfaa84701ff3d395b2435c19fe8c482cc42c4315: Status 404 returned error can't find the container with id 9c9d9748cd168675a3eb4bd8bfaa84701ff3d395b2435c19fe8c482cc42c4315 Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.352570 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.501312 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.543601 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 11:29:40 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld Dec 09 11:29:40 crc kubenswrapper[4849]: [+]process-running ok Dec 09 11:29:40 crc kubenswrapper[4849]: healthz check failed Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.543659 4849 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.795502 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9fd5cb40-fd65-4a78-bf63-e381fcf20819","Type":"ContainerStarted","Data":"6984e5b8408554b8dfb538e012b4a35c4b2c107744850b5b5b1983b8a064bc5d"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.798665 4849 generic.go:334] "Generic (PLEG): container finished" podID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerID="cbe41a0cef098434b13b307618f3dd87f0bdfbe24ca84f624b3edbe3fc9ca650" exitCode=0 Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.798931 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zwl9" event={"ID":"a64f1a61-70ff-4d3d-b033-e65b05414446","Type":"ContainerDied","Data":"cbe41a0cef098434b13b307618f3dd87f0bdfbe24ca84f624b3edbe3fc9ca650"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.799002 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zwl9" event={"ID":"a64f1a61-70ff-4d3d-b033-e65b05414446","Type":"ContainerStarted","Data":"9bb5ea59c700947931bf597dcb72f6fd7316b5d3c3d2e4cbf930efdad5ee34ff"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.869082 4849 generic.go:334] "Generic (PLEG): container finished" podID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerID="b2298c8d0dadc4a075f43fce30c6eeceacae0294beb7193336ddec2611abe1a2" exitCode=0 Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.869167 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27bsk" event={"ID":"91c773f3-2b45-488a-9b3c-5c0f2255f5cc","Type":"ContainerDied","Data":"b2298c8d0dadc4a075f43fce30c6eeceacae0294beb7193336ddec2611abe1a2"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.874841 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1d6b1573-966a-4222-80b9-64e9753a2673","Type":"ContainerStarted","Data":"50577636de5005b9c2e61a256f16bce9b7e2c3b00ba27f338f1e52257cd85eed"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.889453 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" event={"ID":"ca549b95-b862-43e6-8540-595d05555d3c","Type":"ContainerStarted","Data":"ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.889498 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" event={"ID":"ca549b95-b862-43e6-8540-595d05555d3c","Type":"ContainerStarted","Data":"0ac7e0521152da869cdb6f2e787d8f8f6a42dca9295fd5765bbc921fe8e9afd3"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.890132 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.893212 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqwzq" event={"ID":"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a","Type":"ContainerStarted","Data":"e04c0a0a256bb424ba20b490087758859193122db6c575ac23f1781011eaff99"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 
11:29:40.893254 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqwzq" event={"ID":"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a","Type":"ContainerStarted","Data":"9c9d9748cd168675a3eb4bd8bfaa84701ff3d395b2435c19fe8c482cc42c4315"} Dec 09 11:29:40 crc kubenswrapper[4849]: I1209 11:29:40.979550 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" podStartSLOduration=141.979513228 podStartE2EDuration="2m21.979513228s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:40.978829451 +0000 UTC m=+163.518713787" watchObservedRunningTime="2025-12-09 11:29:40.979513228 +0000 UTC m=+163.519397554" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.477094 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-h72pd" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.558660 4849 patch_prober.go:28] interesting pod/router-default-5444994796-flzss container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 11:29:41 crc kubenswrapper[4849]: [-]has-synced failed: reason withheld Dec 09 11:29:41 crc kubenswrapper[4849]: [+]process-running ok Dec 09 11:29:41 crc kubenswrapper[4849]: healthz check failed Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.558741 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-flzss" podUID="c3d88dfe-fa31-4759-baa6-6c847eb53020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.818385 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.892475 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9eff9a-660a-450b-9c63-c473634e7d0a-secret-volume\") pod \"8e9eff9a-660a-450b-9c63-c473634e7d0a\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.892606 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwjcj\" (UniqueName: \"kubernetes.io/projected/8e9eff9a-660a-450b-9c63-c473634e7d0a-kube-api-access-pwjcj\") pod \"8e9eff9a-660a-450b-9c63-c473634e7d0a\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.892637 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9eff9a-660a-450b-9c63-c473634e7d0a-config-volume\") pod \"8e9eff9a-660a-450b-9c63-c473634e7d0a\" (UID: \"8e9eff9a-660a-450b-9c63-c473634e7d0a\") " Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.892846 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.907293 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa5f421b-d486-4b0d-a615-7887df025c00-metrics-certs\") pod \"network-metrics-daemon-qcffq\" (UID: \"fa5f421b-d486-4b0d-a615-7887df025c00\") " pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.907706 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9eff9a-660a-450b-9c63-c473634e7d0a-kube-api-access-pwjcj" (OuterVolumeSpecName: "kube-api-access-pwjcj") pod "8e9eff9a-660a-450b-9c63-c473634e7d0a" (UID: "8e9eff9a-660a-450b-9c63-c473634e7d0a"). InnerVolumeSpecName "kube-api-access-pwjcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.913938 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9eff9a-660a-450b-9c63-c473634e7d0a-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e9eff9a-660a-450b-9c63-c473634e7d0a" (UID: "8e9eff9a-660a-450b-9c63-c473634e7d0a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.918440 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9eff9a-660a-450b-9c63-c473634e7d0a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e9eff9a-660a-450b-9c63-c473634e7d0a" (UID: "8e9eff9a-660a-450b-9c63-c473634e7d0a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.974748 4849 generic.go:334] "Generic (PLEG): container finished" podID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerID="e04c0a0a256bb424ba20b490087758859193122db6c575ac23f1781011eaff99" exitCode=0 Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.974806 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqwzq" event={"ID":"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a","Type":"ContainerDied","Data":"e04c0a0a256bb424ba20b490087758859193122db6c575ac23f1781011eaff99"} Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.991212 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" event={"ID":"8e9eff9a-660a-450b-9c63-c473634e7d0a","Type":"ContainerDied","Data":"e01ba1b12ac19424a22ccbc58fe973e143203d6eb767528b7229fb0235d94466"} Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.991250 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e01ba1b12ac19424a22ccbc58fe973e143203d6eb767528b7229fb0235d94466" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.991306 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.995061 4849 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9eff9a-660a-450b-9c63-c473634e7d0a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.996013 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwjcj\" (UniqueName: \"kubernetes.io/projected/8e9eff9a-660a-450b-9c63-c473634e7d0a-kube-api-access-pwjcj\") on node \"crc\" DevicePath \"\"" Dec 09 11:29:41 crc kubenswrapper[4849]: I1209 11:29:41.996110 4849 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9eff9a-660a-450b-9c63-c473634e7d0a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.010786 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9fd5cb40-fd65-4a78-bf63-e381fcf20819","Type":"ContainerStarted","Data":"c4057398b056834b24d7b1c5ad24ef2c69a8cc9873e71b39576bd67c38a1bbb0"} Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.182312 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qcffq" Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.729347 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.743225 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-flzss" Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.746403 4849 patch_prober.go:28] interesting pod/console-f9d7485db-l6kz7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.746475 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-l6kz7" podUID="1e6507b4-4ff1-4fc1-afee-9e6c2e909908" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.980137 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.980204 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.980383 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:29:42 crc kubenswrapper[4849]: I1209 11:29:42.980421 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:29:43 crc kubenswrapper[4849]: I1209 11:29:43.069309 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1d6b1573-966a-4222-80b9-64e9753a2673","Type":"ContainerStarted","Data":"06889153d0394da41018b5e5218880b6495b638a6f2b5ced49b65999bc556f59"} Dec 09 11:29:43 crc kubenswrapper[4849]: I1209 11:29:43.090878 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.09085797 podStartE2EDuration="5.09085797s" podCreationTimestamp="2025-12-09 11:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:43.089682 +0000 UTC m=+165.629566326" watchObservedRunningTime="2025-12-09 11:29:43.09085797 +0000 UTC m=+165.630742306" Dec 09 11:29:43 crc kubenswrapper[4849]: I1209 11:29:43.519322 4849 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:29:43 crc kubenswrapper[4849]: I1209 11:29:43.609892 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.6098667639999995 podStartE2EDuration="5.609866764s" podCreationTimestamp="2025-12-09 11:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:43.147345038 +0000 UTC m=+165.687229354" watchObservedRunningTime="2025-12-09 11:29:43.609866764 +0000 UTC m=+166.149751080" Dec 09 11:29:43 crc kubenswrapper[4849]: I1209 11:29:43.611965 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-9zsgd" Dec 09 11:29:43 crc kubenswrapper[4849]: I1209 11:29:43.709332 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qcffq"] Dec 09 11:29:43 crc kubenswrapper[4849]: I1209 11:29:43.933205 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-85jmr" Dec 09 11:29:44 crc kubenswrapper[4849]: I1209 11:29:44.162795 4849 generic.go:334] "Generic (PLEG): container finished" podID="9fd5cb40-fd65-4a78-bf63-e381fcf20819" containerID="c4057398b056834b24d7b1c5ad24ef2c69a8cc9873e71b39576bd67c38a1bbb0" exitCode=0 Dec 09 11:29:44 crc kubenswrapper[4849]: I1209 11:29:44.162872 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9fd5cb40-fd65-4a78-bf63-e381fcf20819","Type":"ContainerDied","Data":"c4057398b056834b24d7b1c5ad24ef2c69a8cc9873e71b39576bd67c38a1bbb0"} Dec 09 11:29:44 crc kubenswrapper[4849]: I1209 11:29:44.202850 4849 generic.go:334] "Generic (PLEG): container finished" podID="1d6b1573-966a-4222-80b9-64e9753a2673" containerID="06889153d0394da41018b5e5218880b6495b638a6f2b5ced49b65999bc556f59" exitCode=0 Dec 09 11:29:44 crc kubenswrapper[4849]: I1209 11:29:44.202957 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1d6b1573-966a-4222-80b9-64e9753a2673","Type":"ContainerDied","Data":"06889153d0394da41018b5e5218880b6495b638a6f2b5ced49b65999bc556f59"} Dec 09 11:29:44 crc kubenswrapper[4849]: I1209 11:29:44.227157 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qcffq" event={"ID":"fa5f421b-d486-4b0d-a615-7887df025c00","Type":"ContainerStarted","Data":"a7d181143c72f82f5f03f4356d8de5cdddd5d4a3ef8fcf07ce29728cd92ec8c9"} Dec 09 11:29:45 crc kubenswrapper[4849]: I1209 11:29:45.301061 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qcffq" event={"ID":"fa5f421b-d486-4b0d-a615-7887df025c00","Type":"ContainerStarted","Data":"dce3565b51f37cdd97546c332b6e00fd43f6accc1e754d5f39c55c5593c9b314"} Dec 09 11:29:45 crc kubenswrapper[4849]: I1209 11:29:45.988874 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.079029 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kubelet-dir\") pod \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\" (UID: \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\") " Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.079129 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kube-api-access\") pod \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\" (UID: \"9fd5cb40-fd65-4a78-bf63-e381fcf20819\") " Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.081139 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9fd5cb40-fd65-4a78-bf63-e381fcf20819" (UID: "9fd5cb40-fd65-4a78-bf63-e381fcf20819"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.102038 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9fd5cb40-fd65-4a78-bf63-e381fcf20819" (UID: "9fd5cb40-fd65-4a78-bf63-e381fcf20819"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.180436 4849 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.180478 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fd5cb40-fd65-4a78-bf63-e381fcf20819-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.221514 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.355876 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1d6b1573-966a-4222-80b9-64e9753a2673","Type":"ContainerDied","Data":"50577636de5005b9c2e61a256f16bce9b7e2c3b00ba27f338f1e52257cd85eed"} Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.355926 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50577636de5005b9c2e61a256f16bce9b7e2c3b00ba27f338f1e52257cd85eed" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.355968 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.360327 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qcffq" event={"ID":"fa5f421b-d486-4b0d-a615-7887df025c00","Type":"ContainerStarted","Data":"cd406e9e8e54250e68e17e04e17a32037f36c27fa4299b7d64364c6c45c61b1e"} Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.364434 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9fd5cb40-fd65-4a78-bf63-e381fcf20819","Type":"ContainerDied","Data":"6984e5b8408554b8dfb538e012b4a35c4b2c107744850b5b5b1983b8a064bc5d"} Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.364466 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6984e5b8408554b8dfb538e012b4a35c4b2c107744850b5b5b1983b8a064bc5d" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.364527 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.383248 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6b1573-966a-4222-80b9-64e9753a2673-kubelet-dir\") pod \"1d6b1573-966a-4222-80b9-64e9753a2673\" (UID: \"1d6b1573-966a-4222-80b9-64e9753a2673\") " Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.383332 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d6b1573-966a-4222-80b9-64e9753a2673-kube-api-access\") pod \"1d6b1573-966a-4222-80b9-64e9753a2673\" (UID: \"1d6b1573-966a-4222-80b9-64e9753a2673\") " Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.392523 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d6b1573-966a-4222-80b9-64e9753a2673-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1d6b1573-966a-4222-80b9-64e9753a2673" (UID: "1d6b1573-966a-4222-80b9-64e9753a2673"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.415479 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6b1573-966a-4222-80b9-64e9753a2673-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1d6b1573-966a-4222-80b9-64e9753a2673" (UID: "1d6b1573-966a-4222-80b9-64e9753a2673"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.487792 4849 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d6b1573-966a-4222-80b9-64e9753a2673-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 11:29:46 crc kubenswrapper[4849]: I1209 11:29:46.487826 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d6b1573-966a-4222-80b9-64e9753a2673-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 11:29:51 crc kubenswrapper[4849]: I1209 11:29:51.133503 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:29:51 crc kubenswrapper[4849]: I1209 11:29:51.133835 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.750457 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.754538 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.771894 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qcffq" podStartSLOduration=153.771873494 podStartE2EDuration="2m33.771873494s" podCreationTimestamp="2025-12-09 11:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:29:46.396027263 +0000 UTC m=+168.935911579" watchObservedRunningTime="2025-12-09 11:29:52.771873494 +0000 UTC m=+175.311757810" Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.943797 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.943959 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.944016 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-74c2r" Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.944726 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 
11:29:52.944749 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.945956 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"fdc5364bd4d8aeb5fdbcf7425c8f6aa3b3de22701951e4caabdd85eac8be7f9b"} pod="openshift-console/downloads-7954f5f757-74c2r" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.946034 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" containerID="cri-o://fdc5364bd4d8aeb5fdbcf7425c8f6aa3b3de22701951e4caabdd85eac8be7f9b" gracePeriod=2 Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.946663 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:29:52 crc kubenswrapper[4849]: I1209 11:29:52.946681 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:29:54 crc kubenswrapper[4849]: I1209 11:29:54.726744 4849 generic.go:334] "Generic (PLEG): container finished" podID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerID="fdc5364bd4d8aeb5fdbcf7425c8f6aa3b3de22701951e4caabdd85eac8be7f9b" exitCode=0 Dec 09 11:29:54 crc kubenswrapper[4849]: I1209 11:29:54.726870 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-74c2r" event={"ID":"8948a613-56f3-4a89-adb7-2c4a2262f2ee","Type":"ContainerDied","Data":"fdc5364bd4d8aeb5fdbcf7425c8f6aa3b3de22701951e4caabdd85eac8be7f9b"} Dec 09 11:29:57 crc kubenswrapper[4849]: I1209 11:29:57.721963 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.163392 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s"] Dec 09 11:30:00 crc kubenswrapper[4849]: E1209 11:30:00.164246 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd5cb40-fd65-4a78-bf63-e381fcf20819" containerName="pruner" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.164296 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd5cb40-fd65-4a78-bf63-e381fcf20819" containerName="pruner" Dec 09 11:30:00 crc kubenswrapper[4849]: E1209 11:30:00.164314 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6b1573-966a-4222-80b9-64e9753a2673" containerName="pruner" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.164322 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6b1573-966a-4222-80b9-64e9753a2673" containerName="pruner" Dec 09 11:30:00 crc 
kubenswrapper[4849]: E1209 11:30:00.164334 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9eff9a-660a-450b-9c63-c473634e7d0a" containerName="collect-profiles" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.164342 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9eff9a-660a-450b-9c63-c473634e7d0a" containerName="collect-profiles" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.164602 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9eff9a-660a-450b-9c63-c473634e7d0a" containerName="collect-profiles" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.164634 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd5cb40-fd65-4a78-bf63-e381fcf20819" containerName="pruner" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.164646 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d6b1573-966a-4222-80b9-64e9753a2673" containerName="pruner" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.165202 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.167080 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.167187 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.172262 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s"] Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.291105 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/136005bd-018c-43bc-b768-5f036f7e2c40-secret-volume\") pod \"collect-profiles-29421330-tsz9s\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.291503 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjljr\" (UniqueName: \"kubernetes.io/projected/136005bd-018c-43bc-b768-5f036f7e2c40-kube-api-access-bjljr\") pod \"collect-profiles-29421330-tsz9s\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.291588 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/136005bd-018c-43bc-b768-5f036f7e2c40-config-volume\") pod \"collect-profiles-29421330-tsz9s\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.392397 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/136005bd-018c-43bc-b768-5f036f7e2c40-config-volume\") pod \"collect-profiles-29421330-tsz9s\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc 
kubenswrapper[4849]: I1209 11:30:00.392496 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/136005bd-018c-43bc-b768-5f036f7e2c40-secret-volume\") pod \"collect-profiles-29421330-tsz9s\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.393251 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjljr\" (UniqueName: \"kubernetes.io/projected/136005bd-018c-43bc-b768-5f036f7e2c40-kube-api-access-bjljr\") pod \"collect-profiles-29421330-tsz9s\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.393317 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/136005bd-018c-43bc-b768-5f036f7e2c40-config-volume\") pod \"collect-profiles-29421330-tsz9s\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.404100 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/136005bd-018c-43bc-b768-5f036f7e2c40-secret-volume\") pod \"collect-profiles-29421330-tsz9s\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.431010 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjljr\" (UniqueName: \"kubernetes.io/projected/136005bd-018c-43bc-b768-5f036f7e2c40-kube-api-access-bjljr\") pod \"collect-profiles-29421330-tsz9s\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:00 crc kubenswrapper[4849]: I1209 11:30:00.674913 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:02 crc kubenswrapper[4849]: I1209 11:30:02.945086 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:30:02 crc kubenswrapper[4849]: I1209 11:30:02.945694 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:30:03 crc kubenswrapper[4849]: I1209 11:30:03.318436 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n8jqm" Dec 09 11:30:06 crc kubenswrapper[4849]: I1209 11:30:06.951489 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 11:30:12 crc kubenswrapper[4849]: I1209 11:30:12.942315 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:30:12 crc kubenswrapper[4849]: I1209 11:30:12.942904 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:30:14 crc kubenswrapper[4849]: I1209 11:30:14.831478 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 11:30:14 crc kubenswrapper[4849]: I1209 11:30:14.833713 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 11:30:14 crc kubenswrapper[4849]: I1209 11:30:14.837357 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 11:30:14 crc kubenswrapper[4849]: I1209 11:30:14.837610 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 11:30:14 crc kubenswrapper[4849]: I1209 11:30:14.840333 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 11:30:14 crc kubenswrapper[4849]: I1209 11:30:14.958948 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 11:30:14 crc kubenswrapper[4849]: I1209 11:30:14.959008 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 11:30:15 crc kubenswrapper[4849]: I1209 11:30:15.060367 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 11:30:15 crc kubenswrapper[4849]: I1209 11:30:15.060453 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 11:30:15 crc kubenswrapper[4849]: I1209 11:30:15.061317 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 11:30:15 crc kubenswrapper[4849]: I1209 11:30:15.117928 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 11:30:15 crc kubenswrapper[4849]: I1209 11:30:15.187465 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.417947 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.418926 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.432082 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.620438 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-var-lock\") pod \"installer-9-crc\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.620509 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.620549 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f458026c-1433-4a58-b921-1088b8e9a509-kube-api-access\") pod \"installer-9-crc\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.729832 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f458026c-1433-4a58-b921-1088b8e9a509-kube-api-access\") pod \"installer-9-crc\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.730021 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-var-lock\") pod \"installer-9-crc\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.730184 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-var-lock\") pod \"installer-9-crc\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.732356 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.732468 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.750600 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f458026c-1433-4a58-b921-1088b8e9a509-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"f458026c-1433-4a58-b921-1088b8e9a509\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:19 crc kubenswrapper[4849]: I1209 11:30:19.750913 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:30:21 crc kubenswrapper[4849]: I1209 11:30:21.132590 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:30:21 crc kubenswrapper[4849]: I1209 11:30:21.133063 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:30:21 crc kubenswrapper[4849]: I1209 11:30:21.133106 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:30:21 crc kubenswrapper[4849]: I1209 11:30:21.133702 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 11:30:21 crc kubenswrapper[4849]: I1209 11:30:21.133751 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71" gracePeriod=600 Dec 09 11:30:21 crc kubenswrapper[4849]: E1209 11:30:21.322912 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 09 11:30:21 crc kubenswrapper[4849]: E1209 11:30:21.323638 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kktf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tlmnz_openshift-marketplace(2b3a8f6d-222d-4fee-a997-b30bb399b6be): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 11:30:21 crc kubenswrapper[4849]: E1209 11:30:21.324857 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tlmnz" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" Dec 09 11:30:21 crc kubenswrapper[4849]: E1209 11:30:21.342109 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 09 11:30:21 crc kubenswrapper[4849]: E1209 11:30:21.342272 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mcp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dhd9l_openshift-marketplace(2b6edbbd-c246-4696-ac45-efb4c27bbd1b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 11:30:21 crc kubenswrapper[4849]: E1209 11:30:21.343502 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dhd9l" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" Dec 09 11:30:22 crc kubenswrapper[4849]: I1209 11:30:22.200769 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71" exitCode=0 Dec 09 11:30:22 crc kubenswrapper[4849]: I1209 11:30:22.200833 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71"} Dec 09 11:30:22 crc kubenswrapper[4849]: I1209 11:30:22.942721 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:30:22 crc kubenswrapper[4849]: I1209 11:30:22.943304 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:30:25 crc kubenswrapper[4849]: E1209 11:30:25.339944 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dhd9l" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" Dec 09 11:30:25 crc kubenswrapper[4849]: E1209 11:30:25.341994 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tlmnz" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" Dec 09 11:30:25 crc kubenswrapper[4849]: E1209 11:30:25.422602 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 09 11:30:25 crc kubenswrapper[4849]: E1209 11:30:25.423128 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48chx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6zwl9_openshift-marketplace(a64f1a61-70ff-4d3d-b033-e65b05414446): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 11:30:25 crc kubenswrapper[4849]: E1209 11:30:25.424528 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6zwl9" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" Dec 09 11:30:26 crc kubenswrapper[4849]: E1209 11:30:26.978679 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6zwl9" 
podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" Dec 09 11:30:27 crc kubenswrapper[4849]: E1209 11:30:27.057459 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 11:30:27 crc kubenswrapper[4849]: E1209 11:30:27.057663 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5p75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-md5l9_openshift-marketplace(f19e5981-0356-4c0d-842b-211cfbef65b3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 11:30:27 crc kubenswrapper[4849]: E1209 11:30:27.059044 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-md5l9" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" Dec 09 11:30:27 crc kubenswrapper[4849]: E1209 11:30:27.079937 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 11:30:27 crc kubenswrapper[4849]: E1209 11:30:27.080131 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl42m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-824s2_openshift-marketplace(08448cd5-1dba-4274-ab2b-16d4ac6c0746): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 11:30:27 crc kubenswrapper[4849]: E1209 11:30:27.082242 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-824s2" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" Dec 09 11:30:29 crc kubenswrapper[4849]: E1209 11:30:29.789917 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-824s2" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" Dec 09 11:30:29 crc kubenswrapper[4849]: E1209 11:30:29.793065 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-md5l9" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.036971 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.037629 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d99j4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-27bsk_openshift-marketplace(91c773f3-2b45-488a-9b3c-5c0f2255f5cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.040762 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-27bsk" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.064622 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.064835 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9wr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fqwzq_openshift-marketplace(5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.065994 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fqwzq" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" Dec 09 11:30:30 crc kubenswrapper[4849]: I1209 11:30:30.090781 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s"] Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.120460 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.120576 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2477v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lmwmm_openshift-marketplace(33858531-f998-4cee-b45d-9d5cd8b45f2e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.122523 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lmwmm" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" Dec 09 11:30:30 crc kubenswrapper[4849]: I1209 11:30:30.279342 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" event={"ID":"136005bd-018c-43bc-b768-5f036f7e2c40","Type":"ContainerStarted","Data":"08861a3218a6ddf3d4cc770e1e01ee126a83a99a83a7001071319f2487d1cbce"} Dec 09 11:30:30 crc kubenswrapper[4849]: I1209 11:30:30.281635 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-74c2r" event={"ID":"8948a613-56f3-4a89-adb7-2c4a2262f2ee","Type":"ContainerStarted","Data":"5d820bb3fbfaffc31199d47afe753f960769436947f0693e25782832e997fba8"} Dec 09 11:30:30 crc kubenswrapper[4849]: I1209 11:30:30.282655 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-74c2r" Dec 09 11:30:30 crc kubenswrapper[4849]: I1209 11:30:30.285184 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:30:30 crc kubenswrapper[4849]: I1209 11:30:30.285235 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:30:30 crc 
kubenswrapper[4849]: I1209 11:30:30.298989 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"579c1698bf3789148ad5988a944ebf95e9935ab2868988359a420af98bca3008"} Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.305224 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lmwmm" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.308619 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-27bsk" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" Dec 09 11:30:30 crc kubenswrapper[4849]: E1209 11:30:30.308828 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fqwzq" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" Dec 09 11:30:30 crc kubenswrapper[4849]: I1209 11:30:30.376212 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 11:30:30 crc kubenswrapper[4849]: I1209 11:30:30.581671 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 11:30:31 crc kubenswrapper[4849]: I1209 11:30:31.304185 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f458026c-1433-4a58-b921-1088b8e9a509","Type":"ContainerStarted","Data":"86f0159f9f7da912e56286471b1c2886603b87b314f02ee1fbda249f6c67d46e"} Dec 09 11:30:31 crc kubenswrapper[4849]: I1209 11:30:31.306303 4849 generic.go:334] "Generic (PLEG): container finished" podID="136005bd-018c-43bc-b768-5f036f7e2c40" containerID="96db2af45ff8acb81b86ef373ccf1adb3af357e38745c8cf08077d88580ee321" exitCode=0 Dec 09 11:30:31 crc kubenswrapper[4849]: I1209 11:30:31.306721 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" event={"ID":"136005bd-018c-43bc-b768-5f036f7e2c40","Type":"ContainerDied","Data":"96db2af45ff8acb81b86ef373ccf1adb3af357e38745c8cf08077d88580ee321"} Dec 09 11:30:31 crc kubenswrapper[4849]: I1209 11:30:31.307731 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f089c4f-99c1-45ff-86fc-90e0deb153ea","Type":"ContainerStarted","Data":"051cba64ba88c97da29ad46a3edf94a9c6642beeab5e6170fc57d99ec66fcad0"} Dec 09 11:30:31 crc kubenswrapper[4849]: I1209 11:30:31.308795 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:30:31 crc kubenswrapper[4849]: I1209 11:30:31.308836 4849 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.316271 4849 generic.go:334] "Generic (PLEG): container finished" podID="7f089c4f-99c1-45ff-86fc-90e0deb153ea" containerID="d4acc06c8256bfb33276dfe12c6a31168ef095ec1085b5ea91ed8d51b46715dc" exitCode=0 Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.316326 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f089c4f-99c1-45ff-86fc-90e0deb153ea","Type":"ContainerDied","Data":"d4acc06c8256bfb33276dfe12c6a31168ef095ec1085b5ea91ed8d51b46715dc"} Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.319780 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f458026c-1433-4a58-b921-1088b8e9a509","Type":"ContainerStarted","Data":"e4e407d3154b818a06cb66e15da50241d23ab28a6d3aba0de06a9012afc1069d"} Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.320263 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.320335 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.355031 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=13.355008879 podStartE2EDuration="13.355008879s" podCreationTimestamp="2025-12-09 11:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:30:32.354233799 +0000 UTC m=+214.894118125" watchObservedRunningTime="2025-12-09 11:30:32.355008879 +0000 UTC m=+214.894893195" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.577460 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.708721 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/136005bd-018c-43bc-b768-5f036f7e2c40-config-volume\") pod \"136005bd-018c-43bc-b768-5f036f7e2c40\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.708913 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjljr\" (UniqueName: \"kubernetes.io/projected/136005bd-018c-43bc-b768-5f036f7e2c40-kube-api-access-bjljr\") pod \"136005bd-018c-43bc-b768-5f036f7e2c40\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.708945 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/136005bd-018c-43bc-b768-5f036f7e2c40-secret-volume\") pod \"136005bd-018c-43bc-b768-5f036f7e2c40\" (UID: \"136005bd-018c-43bc-b768-5f036f7e2c40\") " Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.710426 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/136005bd-018c-43bc-b768-5f036f7e2c40-config-volume" (OuterVolumeSpecName: "config-volume") pod "136005bd-018c-43bc-b768-5f036f7e2c40" (UID: "136005bd-018c-43bc-b768-5f036f7e2c40"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.722557 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/136005bd-018c-43bc-b768-5f036f7e2c40-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "136005bd-018c-43bc-b768-5f036f7e2c40" (UID: "136005bd-018c-43bc-b768-5f036f7e2c40"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.723561 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/136005bd-018c-43bc-b768-5f036f7e2c40-kube-api-access-bjljr" (OuterVolumeSpecName: "kube-api-access-bjljr") pod "136005bd-018c-43bc-b768-5f036f7e2c40" (UID: "136005bd-018c-43bc-b768-5f036f7e2c40"). InnerVolumeSpecName "kube-api-access-bjljr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.810882 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjljr\" (UniqueName: \"kubernetes.io/projected/136005bd-018c-43bc-b768-5f036f7e2c40-kube-api-access-bjljr\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.810913 4849 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/136005bd-018c-43bc-b768-5f036f7e2c40-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.810923 4849 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/136005bd-018c-43bc-b768-5f036f7e2c40-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.942862 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.942907 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.942938 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:30:32 crc kubenswrapper[4849]: I1209 11:30:32.942968 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 11:30:33 crc kubenswrapper[4849]: I1209 11:30:33.324980 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" event={"ID":"136005bd-018c-43bc-b768-5f036f7e2c40","Type":"ContainerDied","Data":"08861a3218a6ddf3d4cc770e1e01ee126a83a99a83a7001071319f2487d1cbce"} Dec 09 11:30:33 crc kubenswrapper[4849]: I1209 11:30:33.325839 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08861a3218a6ddf3d4cc770e1e01ee126a83a99a83a7001071319f2487d1cbce" Dec 09 11:30:33 crc kubenswrapper[4849]: I1209 11:30:33.325605 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s" Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.039483 4849 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.077585 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kube-api-access\") pod \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\" (UID: \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\") "
Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.077667 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kubelet-dir\") pod \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\" (UID: \"7f089c4f-99c1-45ff-86fc-90e0deb153ea\") "
Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.077927 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f089c4f-99c1-45ff-86fc-90e0deb153ea" (UID: "7f089c4f-99c1-45ff-86fc-90e0deb153ea"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.083981 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f089c4f-99c1-45ff-86fc-90e0deb153ea" (UID: "7f089c4f-99c1-45ff-86fc-90e0deb153ea"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.183293 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.183342 4849 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f089c4f-99c1-45ff-86fc-90e0deb153ea-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.331627 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f089c4f-99c1-45ff-86fc-90e0deb153ea","Type":"ContainerDied","Data":"051cba64ba88c97da29ad46a3edf94a9c6642beeab5e6170fc57d99ec66fcad0"}
Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.332881 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="051cba64ba88c97da29ad46a3edf94a9c6642beeab5e6170fc57d99ec66fcad0"
Dec 09 11:30:34 crc kubenswrapper[4849]: I1209 11:30:34.331682 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 11:30:42 crc kubenswrapper[4849]: I1209 11:30:42.942582 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Dec 09 11:30:42 crc kubenswrapper[4849]: I1209 11:30:42.943972 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Dec 09 11:30:42 crc kubenswrapper[4849]: I1209 11:30:42.942777 4849 patch_prober.go:28] interesting pod/downloads-7954f5f757-74c2r container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Dec 09 11:30:42 crc kubenswrapper[4849]: I1209 11:30:42.944152 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-74c2r" podUID="8948a613-56f3-4a89-adb7-2c4a2262f2ee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Dec 09 11:30:46 crc kubenswrapper[4849]: I1209 11:30:46.602169 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zwl9" event={"ID":"a64f1a61-70ff-4d3d-b033-e65b05414446","Type":"ContainerStarted","Data":"6e50aac9eeb28ae7d04d818059943b1d05d9838a2154fbb310808048ebbc54a5"}
Dec 09 11:30:46 crc kubenswrapper[4849]: I1209 11:30:46.604625 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhd9l" event={"ID":"2b6edbbd-c246-4696-ac45-efb4c27bbd1b","Type":"ContainerStarted","Data":"1c26c3ec6f7ae25a85e4ad15bcde058d561115b3a78d5ed03e81d26b86f6bc8a"}
Dec 09 11:30:48 crc kubenswrapper[4849]: I1209 11:30:48.615437 4849 generic.go:334] "Generic (PLEG): container finished" podID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerID="1c26c3ec6f7ae25a85e4ad15bcde058d561115b3a78d5ed03e81d26b86f6bc8a" exitCode=0
Dec 09 11:30:48 crc kubenswrapper[4849]: I1209 11:30:48.615493 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhd9l" event={"ID":"2b6edbbd-c246-4696-ac45-efb4c27bbd1b","Type":"ContainerDied","Data":"1c26c3ec6f7ae25a85e4ad15bcde058d561115b3a78d5ed03e81d26b86f6bc8a"}
Dec 09 11:30:50 crc kubenswrapper[4849]: I1209 11:30:50.626530 4849 generic.go:334] "Generic (PLEG): container finished" podID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerID="6e50aac9eeb28ae7d04d818059943b1d05d9838a2154fbb310808048ebbc54a5" exitCode=0
Dec 09 11:30:50 crc kubenswrapper[4849]: I1209 11:30:50.626618 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zwl9" event={"ID":"a64f1a61-70ff-4d3d-b033-e65b05414446","Type":"ContainerDied","Data":"6e50aac9eeb28ae7d04d818059943b1d05d9838a2154fbb310808048ebbc54a5"}
Dec 09 11:30:52 crc kubenswrapper[4849]: I1209 11:30:52.946956 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-74c2r"
Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.385575 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25rtx"]
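[Annotation] The repeated Readiness/Liveness failures against http://10.217.0.20:8080/ above are the kubelet's HTTP prober getting "connection refused" until the download-server starts listening; at 11:30:52 the readiness probe flips to ready. A minimal sketch of this kind of check, assuming a plain GET with a per-probe timeout (the real kubelet prober also sets headers, bounds redirects, and truncates the body, none of which is modeled here):

    // probe_sketch.go -- rough approximation of an HTTP readiness probe.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func probeHTTP(url string, timeout time.Duration) (string, error) {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            // "connect: connection refused" surfaces here while the
            // container's server socket is not yet listening.
            return "failure", err
        }
        defer resp.Body.Close()
        io.Copy(io.Discard, resp.Body) // drain: probes ignore the body content
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return "success", nil
        }
        return "failure", fmt.Errorf("unexpected status %d", resp.StatusCode)
    }

    func main() {
        result, err := probeHTTP("http://10.217.0.20:8080/", time.Second)
        fmt.Println(result, err)
    }

Run against the pod IP above, this prints "failure" with a dial error until the server binds :8080, then "success".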
DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25rtx"] Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.753896 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zwl9" event={"ID":"a64f1a61-70ff-4d3d-b033-e65b05414446","Type":"ContainerStarted","Data":"7d316078fe0bf6fce21884f792abd120830db31532d778754f26faa895181bd7"} Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.756189 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27bsk" event={"ID":"91c773f3-2b45-488a-9b3c-5c0f2255f5cc","Type":"ContainerStarted","Data":"03008738697b56b1b964951c82479ea553f7704b8da4ac1ffc5ace100a3b5d68"} Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.766649 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmwmm" event={"ID":"33858531-f998-4cee-b45d-9d5cd8b45f2e","Type":"ContainerStarted","Data":"c05f480ebab02084603329af26bd4daec7e28b57546a88d16cd4aad2a735c895"} Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.773218 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhd9l" event={"ID":"2b6edbbd-c246-4696-ac45-efb4c27bbd1b","Type":"ContainerStarted","Data":"90ff3a9e9ac2a8e40f279e0eb5888b10b142f567e3fe7768e0f235c0ff570735"} Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.775312 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-824s2" event={"ID":"08448cd5-1dba-4274-ab2b-16d4ac6c0746","Type":"ContainerStarted","Data":"e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361"} Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.777495 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqwzq" event={"ID":"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a","Type":"ContainerStarted","Data":"e4935c2a90235ce9380dea8a6c57d898e6717eba62cba4a1dc9baec74aba7f21"} Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.804563 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md5l9" event={"ID":"f19e5981-0356-4c0d-842b-211cfbef65b3","Type":"ContainerStarted","Data":"7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc"} Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.807245 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmnz" event={"ID":"2b3a8f6d-222d-4fee-a997-b30bb399b6be","Type":"ContainerStarted","Data":"5b4c546614f3c29ec92d8409502a6478a8f355ba08aac7f225f4b7189f201dfd"} Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.878370 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6zwl9" podStartSLOduration=3.9814014909999997 podStartE2EDuration="1m16.878348978s" podCreationTimestamp="2025-12-09 11:29:38 +0000 UTC" firstStartedPulling="2025-12-09 11:29:40.800783482 +0000 UTC m=+163.340667798" lastFinishedPulling="2025-12-09 11:30:53.697730979 +0000 UTC m=+236.237615285" observedRunningTime="2025-12-09 11:30:54.802351651 +0000 UTC m=+237.342235977" watchObservedRunningTime="2025-12-09 11:30:54.878348978 +0000 UTC m=+237.418233294" Dec 09 11:30:54 crc kubenswrapper[4849]: I1209 11:30:54.985883 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dhd9l" podStartSLOduration=4.786024216 
podStartE2EDuration="1m19.985859349s" podCreationTimestamp="2025-12-09 11:29:35 +0000 UTC" firstStartedPulling="2025-12-09 11:29:38.512373255 +0000 UTC m=+161.052257581" lastFinishedPulling="2025-12-09 11:30:53.712208398 +0000 UTC m=+236.252092714" observedRunningTime="2025-12-09 11:30:54.983476858 +0000 UTC m=+237.523361174" watchObservedRunningTime="2025-12-09 11:30:54.985859349 +0000 UTC m=+237.525743665" Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.824076 4849 generic.go:334] "Generic (PLEG): container finished" podID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerID="c05f480ebab02084603329af26bd4daec7e28b57546a88d16cd4aad2a735c895" exitCode=0 Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.824145 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmwmm" event={"ID":"33858531-f998-4cee-b45d-9d5cd8b45f2e","Type":"ContainerDied","Data":"c05f480ebab02084603329af26bd4daec7e28b57546a88d16cd4aad2a735c895"} Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.828327 4849 generic.go:334] "Generic (PLEG): container finished" podID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerID="e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361" exitCode=0 Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.828433 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-824s2" event={"ID":"08448cd5-1dba-4274-ab2b-16d4ac6c0746","Type":"ContainerDied","Data":"e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361"} Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.832144 4849 generic.go:334] "Generic (PLEG): container finished" podID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerID="7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc" exitCode=0 Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.832192 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md5l9" event={"ID":"f19e5981-0356-4c0d-842b-211cfbef65b3","Type":"ContainerDied","Data":"7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc"} Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.834677 4849 generic.go:334] "Generic (PLEG): container finished" podID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerID="5b4c546614f3c29ec92d8409502a6478a8f355ba08aac7f225f4b7189f201dfd" exitCode=0 Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.834720 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmnz" event={"ID":"2b3a8f6d-222d-4fee-a997-b30bb399b6be","Type":"ContainerDied","Data":"5b4c546614f3c29ec92d8409502a6478a8f355ba08aac7f225f4b7189f201dfd"} Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.837355 4849 generic.go:334] "Generic (PLEG): container finished" podID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerID="03008738697b56b1b964951c82479ea553f7704b8da4ac1ffc5ace100a3b5d68" exitCode=0 Dec 09 11:30:55 crc kubenswrapper[4849]: I1209 11:30:55.837395 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27bsk" event={"ID":"91c773f3-2b45-488a-9b3c-5c0f2255f5cc","Type":"ContainerDied","Data":"03008738697b56b1b964951c82479ea553f7704b8da4ac1ffc5ace100a3b5d68"} Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.208210 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.208259 4849 
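[Annotation] The pod_startup_latency_tracker lines above can be cross-checked: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For community-operators-dhd9l: 79.985859349s - 75.199835133s = 4.786024216s, exactly the logged values. A small sketch re-deriving the arithmetic (timestamps copied from the log; the subtraction rule is inferred from these figures, not taken from kubelet source):

    // latency_check.go -- re-derive the startup durations logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-12-09 11:29:35 +0000 UTC")
        watchObserved, _ := time.Parse(layout, "2025-12-09 11:30:54.985859349 +0000 UTC")
        pullStart, _ := time.Parse(layout, "2025-12-09 11:29:38.512373255 +0000 UTC")
        pullEnd, _ := time.Parse(layout, "2025-12-09 11:30:53.712208398 +0000 UTC")

        e2e := watchObserved.Sub(created)   // 1m19.985859349s, as logged
        slo := e2e - pullEnd.Sub(pullStart) // 4.786024216s, as logged
        fmt.Println(e2e, slo)
    }

The same rule reproduces redhat-operators-6zwl9's figures (76.878348978s - 72.896947487s = 3.981401491s), so the SLO duration here is effectively "startup time excluding image pulls".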
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.853451 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27bsk" event={"ID":"91c773f3-2b45-488a-9b3c-5c0f2255f5cc","Type":"ContainerStarted","Data":"a14609a2f4b778a6854501810fe6db1a043afa41e5a38ea585a67473c7448ae5"}
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.879489 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmwmm" event={"ID":"33858531-f998-4cee-b45d-9d5cd8b45f2e","Type":"ContainerStarted","Data":"6a81b169eb804253a0e59069ac80c104a16e2dea1a56c58d927bf9a6300985cd"}
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.892190 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-27bsk" podStartSLOduration=4.466295285 podStartE2EDuration="1m19.892167039s" podCreationTimestamp="2025-12-09 11:29:37 +0000 UTC" firstStartedPulling="2025-12-09 11:29:40.871022919 +0000 UTC m=+163.410907235" lastFinishedPulling="2025-12-09 11:30:56.296894673 +0000 UTC m=+238.836778989" observedRunningTime="2025-12-09 11:30:56.885790527 +0000 UTC m=+239.425674843" watchObservedRunningTime="2025-12-09 11:30:56.892167039 +0000 UTC m=+239.432051365"
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.897673 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-824s2" event={"ID":"08448cd5-1dba-4274-ab2b-16d4ac6c0746","Type":"ContainerStarted","Data":"02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69"}
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.916554 4849 generic.go:334] "Generic (PLEG): container finished" podID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerID="e4935c2a90235ce9380dea8a6c57d898e6717eba62cba4a1dc9baec74aba7f21" exitCode=0
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.916640 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqwzq" event={"ID":"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a","Type":"ContainerDied","Data":"e4935c2a90235ce9380dea8a6c57d898e6717eba62cba4a1dc9baec74aba7f21"}
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.917235 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lmwmm" podStartSLOduration=4.151929525 podStartE2EDuration="1m20.917218108s" podCreationTimestamp="2025-12-09 11:29:36 +0000 UTC" firstStartedPulling="2025-12-09 11:29:39.778051653 +0000 UTC m=+162.317935969" lastFinishedPulling="2025-12-09 11:30:56.543340246 +0000 UTC m=+239.083224552" observedRunningTime="2025-12-09 11:30:56.915871804 +0000 UTC m=+239.455756130" watchObservedRunningTime="2025-12-09 11:30:56.917218108 +0000 UTC m=+239.457102424"
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.930304 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md5l9" event={"ID":"f19e5981-0356-4c0d-842b-211cfbef65b3","Type":"ContainerStarted","Data":"c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181"}
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.968432 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-md5l9" podStartSLOduration=5.048263149 podStartE2EDuration="1m21.968393533s" podCreationTimestamp="2025-12-09 11:29:35 +0000 UTC" firstStartedPulling="2025-12-09 11:29:39.663326115 +0000 UTC m=+162.203210431" lastFinishedPulling="2025-12-09 11:30:56.583456499 +0000 UTC m=+239.123340815" observedRunningTime="2025-12-09 11:30:56.966759531 +0000 UTC m=+239.506643847" watchObservedRunningTime="2025-12-09 11:30:56.968393533 +0000 UTC m=+239.508277849"
Dec 09 11:30:56 crc kubenswrapper[4849]: I1209 11:30:56.971192 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-824s2" podStartSLOduration=3.842511935 podStartE2EDuration="1m22.971168314s" podCreationTimestamp="2025-12-09 11:29:34 +0000 UTC" firstStartedPulling="2025-12-09 11:29:37.28024819 +0000 UTC m=+159.820132506" lastFinishedPulling="2025-12-09 11:30:56.408904569 +0000 UTC m=+238.948788885" observedRunningTime="2025-12-09 11:30:56.942514963 +0000 UTC m=+239.482399279" watchObservedRunningTime="2025-12-09 11:30:56.971168314 +0000 UTC m=+239.511052630"
Dec 09 11:30:57 crc kubenswrapper[4849]: I1209 11:30:57.367811 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lmwmm"
Dec 09 11:30:57 crc kubenswrapper[4849]: I1209 11:30:57.367906 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lmwmm"
Dec 09 11:30:57 crc kubenswrapper[4849]: I1209 11:30:57.391501 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dhd9l" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerName="registry-server" probeResult="failure" output=<
Dec 09 11:30:57 crc kubenswrapper[4849]: 	timeout: failed to connect service ":50051" within 1s
Dec 09 11:30:57 crc kubenswrapper[4849]:  >
Dec 09 11:30:57 crc kubenswrapper[4849]: I1209 11:30:57.673374 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-27bsk"
Dec 09 11:30:57 crc kubenswrapper[4849]: I1209 11:30:57.673472 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-27bsk"
Dec 09 11:30:57 crc kubenswrapper[4849]: I1209 11:30:57.939840 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmnz" event={"ID":"2b3a8f6d-222d-4fee-a997-b30bb399b6be","Type":"ContainerStarted","Data":"33f3afe09da0d6e353bea37949e502fb91885297a295fd08953ac15d9f514477"}
Dec 09 11:30:57 crc kubenswrapper[4849]: I1209 11:30:57.971917 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tlmnz" podStartSLOduration=4.868176384 podStartE2EDuration="1m22.971898966s" podCreationTimestamp="2025-12-09 11:29:35 +0000 UTC" firstStartedPulling="2025-12-09 11:29:38.615681673 +0000 UTC m=+161.155565989" lastFinishedPulling="2025-12-09 11:30:56.719404255 +0000 UTC m=+239.259288571" observedRunningTime="2025-12-09 11:30:57.970918972 +0000 UTC m=+240.510803288" watchObservedRunningTime="2025-12-09 11:30:57.971898966 +0000 UTC m=+240.511783282"
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.256319 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-824s2"]
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.256861 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-824s2" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerName="registry-server" containerID="cri-o://02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69" gracePeriod=30
containerID="cri-o://02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69" gracePeriod=30 Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.267865 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-md5l9"] Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.268219 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-md5l9" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerName="registry-server" containerID="cri-o://c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181" gracePeriod=30 Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.278673 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhd9l"] Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.279140 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dhd9l" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerName="registry-server" containerID="cri-o://90ff3a9e9ac2a8e40f279e0eb5888b10b142f567e3fe7768e0f235c0ff570735" gracePeriod=30 Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.295456 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlmnz"] Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.301763 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xksm"] Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.301952 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerName="marketplace-operator" containerID="cri-o://5846adfb8ea863f5f0091497887ec3d3c7e646604df8e22c60d5ef8cee398f05" gracePeriod=30 Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.313869 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27bsk"] Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.331184 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpwrl"] Dec 09 11:30:58 crc kubenswrapper[4849]: E1209 11:30:58.331529 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136005bd-018c-43bc-b768-5f036f7e2c40" containerName="collect-profiles" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.331555 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="136005bd-018c-43bc-b768-5f036f7e2c40" containerName="collect-profiles" Dec 09 11:30:58 crc kubenswrapper[4849]: E1209 11:30:58.331585 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f089c4f-99c1-45ff-86fc-90e0deb153ea" containerName="pruner" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.331594 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f089c4f-99c1-45ff-86fc-90e0deb153ea" containerName="pruner" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.331729 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="136005bd-018c-43bc-b768-5f036f7e2c40" containerName="collect-profiles" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.331742 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f089c4f-99c1-45ff-86fc-90e0deb153ea" containerName="pruner" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.332259 4849 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.339569 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmwmm"] Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.344538 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6zwl9"] Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.344806 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6zwl9" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerName="registry-server" containerID="cri-o://7d316078fe0bf6fce21884f792abd120830db31532d778754f26faa895181bd7" gracePeriod=30 Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.348074 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fqwzq"] Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.372589 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpwrl"] Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.442012 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmms2\" (UniqueName: \"kubernetes.io/projected/6eb652dc-111b-4544-a20e-0c98d451825d-kube-api-access-jmms2\") pod \"marketplace-operator-79b997595-fpwrl\" (UID: \"6eb652dc-111b-4544-a20e-0c98d451825d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.442343 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6eb652dc-111b-4544-a20e-0c98d451825d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpwrl\" (UID: \"6eb652dc-111b-4544-a20e-0c98d451825d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.442517 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eb652dc-111b-4544-a20e-0c98d451825d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpwrl\" (UID: \"6eb652dc-111b-4544-a20e-0c98d451825d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.545152 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eb652dc-111b-4544-a20e-0c98d451825d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpwrl\" (UID: \"6eb652dc-111b-4544-a20e-0c98d451825d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.545225 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmms2\" (UniqueName: \"kubernetes.io/projected/6eb652dc-111b-4544-a20e-0c98d451825d-kube-api-access-jmms2\") pod \"marketplace-operator-79b997595-fpwrl\" (UID: \"6eb652dc-111b-4544-a20e-0c98d451825d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.545265 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6eb652dc-111b-4544-a20e-0c98d451825d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpwrl\" (UID: \"6eb652dc-111b-4544-a20e-0c98d451825d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.547700 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eb652dc-111b-4544-a20e-0c98d451825d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpwrl\" (UID: \"6eb652dc-111b-4544-a20e-0c98d451825d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.557880 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lmwmm" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerName="registry-server" probeResult="failure" output=< Dec 09 11:30:58 crc kubenswrapper[4849]: timeout: failed to connect service ":50051" within 1s Dec 09 11:30:58 crc kubenswrapper[4849]: > Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.570774 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6eb652dc-111b-4544-a20e-0c98d451825d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpwrl\" (UID: \"6eb652dc-111b-4544-a20e-0c98d451825d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.577282 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmms2\" (UniqueName: \"kubernetes.io/projected/6eb652dc-111b-4544-a20e-0c98d451825d-kube-api-access-jmms2\") pod \"marketplace-operator-79b997595-fpwrl\" (UID: \"6eb652dc-111b-4544-a20e-0c98d451825d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.634142 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.765532 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-27bsk" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerName="registry-server" probeResult="failure" output=< Dec 09 11:30:58 crc kubenswrapper[4849]: timeout: failed to connect service ":50051" within 1s Dec 09 11:30:58 crc kubenswrapper[4849]: > Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.811463 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-824s2_08448cd5-1dba-4274-ab2b-16d4ac6c0746/registry-server/0.log" Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.813270 4849 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.854665 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-catalog-content\") pod \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") "
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.854745 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl42m\" (UniqueName: \"kubernetes.io/projected/08448cd5-1dba-4274-ab2b-16d4ac6c0746-kube-api-access-xl42m\") pod \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") "
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.854773 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-utilities\") pod \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\" (UID: \"08448cd5-1dba-4274-ab2b-16d4ac6c0746\") "
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.855502 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-utilities" (OuterVolumeSpecName: "utilities") pod "08448cd5-1dba-4274-ab2b-16d4ac6c0746" (UID: "08448cd5-1dba-4274-ab2b-16d4ac6c0746"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.868040 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08448cd5-1dba-4274-ab2b-16d4ac6c0746-kube-api-access-xl42m" (OuterVolumeSpecName: "kube-api-access-xl42m") pod "08448cd5-1dba-4274-ab2b-16d4ac6c0746" (UID: "08448cd5-1dba-4274-ab2b-16d4ac6c0746"). InnerVolumeSpecName "kube-api-access-xl42m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.914535 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6zwl9"
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.924038 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-md5l9_f19e5981-0356-4c0d-842b-211cfbef65b3/registry-server/0.log"
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.925332 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-md5l9"
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.960117 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl42m\" (UniqueName: \"kubernetes.io/projected/08448cd5-1dba-4274-ab2b-16d4ac6c0746-kube-api-access-xl42m\") on node \"crc\" DevicePath \"\""
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.979547 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 11:30:58 crc kubenswrapper[4849]: I1209 11:30:58.977189 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-824s2_08448cd5-1dba-4274-ab2b-16d4ac6c0746/registry-server/0.log"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.005032 4849 generic.go:334] "Generic (PLEG): container finished" podID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerID="02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69" exitCode=1
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.005084 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-824s2" event={"ID":"08448cd5-1dba-4274-ab2b-16d4ac6c0746","Type":"ContainerDied","Data":"02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69"}
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.005135 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-824s2" event={"ID":"08448cd5-1dba-4274-ab2b-16d4ac6c0746","Type":"ContainerDied","Data":"a5e424626868777ba0edd51d48208c9dc8951d54a6bae0d172792ff442449f0a"}
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.005153 4849 scope.go:117] "RemoveContainer" containerID="02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.006289 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-824s2"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.022689 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-md5l9_f19e5981-0356-4c0d-842b-211cfbef65b3/registry-server/0.log"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.023355 4849 generic.go:334] "Generic (PLEG): container finished" podID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerID="c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181" exitCode=1
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.023432 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md5l9" event={"ID":"f19e5981-0356-4c0d-842b-211cfbef65b3","Type":"ContainerDied","Data":"c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181"}
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.023459 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md5l9" event={"ID":"f19e5981-0356-4c0d-842b-211cfbef65b3","Type":"ContainerDied","Data":"8d50e5c56d99f0a418596005c3dcfdf7d5b152958e977cd61801f0f0b50ce773"}
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.023521 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-md5l9"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.025234 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqwzq" event={"ID":"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a","Type":"ContainerStarted","Data":"d8ef1c854a54ad55a7f15685090b0d71ecf5d175731be60e29755ca1ad531265"}
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.025335 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fqwzq" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerName="registry-server" containerID="cri-o://d8ef1c854a54ad55a7f15685090b0d71ecf5d175731be60e29755ca1ad531265" gracePeriod=30
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.027073 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6zwl9_a64f1a61-70ff-4d3d-b033-e65b05414446/registry-server/0.log"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.028362 4849 generic.go:334] "Generic (PLEG): container finished" podID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerID="7d316078fe0bf6fce21884f792abd120830db31532d778754f26faa895181bd7" exitCode=1
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.028402 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zwl9" event={"ID":"a64f1a61-70ff-4d3d-b033-e65b05414446","Type":"ContainerDied","Data":"7d316078fe0bf6fce21884f792abd120830db31532d778754f26faa895181bd7"}
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.029484 4849 generic.go:334] "Generic (PLEG): container finished" podID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerID="5846adfb8ea863f5f0091497887ec3d3c7e646604df8e22c60d5ef8cee398f05" exitCode=0
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.029650 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lmwmm" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerName="registry-server" containerID="cri-o://6a81b169eb804253a0e59069ac80c104a16e2dea1a56c58d927bf9a6300985cd" gracePeriod=30
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.029703 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" event={"ID":"3e7c4a38-1f7c-4cb1-b757-8250869e1597","Type":"ContainerDied","Data":"5846adfb8ea863f5f0091497887ec3d3c7e646604df8e22c60d5ef8cee398f05"}
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.029823 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tlmnz" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerName="registry-server" containerID="cri-o://33f3afe09da0d6e353bea37949e502fb91885297a295fd08953ac15d9f514477" gracePeriod=30
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.030069 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-27bsk" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerName="registry-server" containerID="cri-o://a14609a2f4b778a6854501810fe6db1a043afa41e5a38ea585a67473c7448ae5" gracePeriod=30
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.063347 4849 scope.go:117] "RemoveContainer" containerID="e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.080495 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-catalog-content\") pod \"f19e5981-0356-4c0d-842b-211cfbef65b3\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") "
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-catalog-content\") pod \"f19e5981-0356-4c0d-842b-211cfbef65b3\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.080604 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5p75\" (UniqueName: \"kubernetes.io/projected/f19e5981-0356-4c0d-842b-211cfbef65b3-kube-api-access-c5p75\") pod \"f19e5981-0356-4c0d-842b-211cfbef65b3\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.080762 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-utilities\") pod \"f19e5981-0356-4c0d-842b-211cfbef65b3\" (UID: \"f19e5981-0356-4c0d-842b-211cfbef65b3\") " Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.082253 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-utilities" (OuterVolumeSpecName: "utilities") pod "f19e5981-0356-4c0d-842b-211cfbef65b3" (UID: "f19e5981-0356-4c0d-842b-211cfbef65b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.097316 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19e5981-0356-4c0d-842b-211cfbef65b3-kube-api-access-c5p75" (OuterVolumeSpecName: "kube-api-access-c5p75") pod "f19e5981-0356-4c0d-842b-211cfbef65b3" (UID: "f19e5981-0356-4c0d-842b-211cfbef65b3"). InnerVolumeSpecName "kube-api-access-c5p75". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.110088 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.110186 4849 scope.go:117] "RemoveContainer" containerID="9954e8b2edb647fe88b08b32076617563cc648410652fa25006d2a6e2462c0eb" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.111197 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fqwzq" podStartSLOduration=4.636273302 podStartE2EDuration="1m21.111187552s" podCreationTimestamp="2025-12-09 11:29:38 +0000 UTC" firstStartedPulling="2025-12-09 11:29:40.894908326 +0000 UTC m=+163.434792642" lastFinishedPulling="2025-12-09 11:30:57.369822576 +0000 UTC m=+239.909706892" observedRunningTime="2025-12-09 11:30:59.11071001 +0000 UTC m=+241.650594336" watchObservedRunningTime="2025-12-09 11:30:59.111187552 +0000 UTC m=+241.651071868" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.171697 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08448cd5-1dba-4274-ab2b-16d4ac6c0746" (UID: "08448cd5-1dba-4274-ab2b-16d4ac6c0746"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.179880 4849 scope.go:117] "RemoveContainer" containerID="02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.182103 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5p75\" (UniqueName: \"kubernetes.io/projected/f19e5981-0356-4c0d-842b-211cfbef65b3-kube-api-access-c5p75\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.182147 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08448cd5-1dba-4274-ab2b-16d4ac6c0746-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.182161 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:59 crc kubenswrapper[4849]: E1209 11:30:59.187111 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69\": container with ID starting with 02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69 not found: ID does not exist" containerID="02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.187172 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69"} err="failed to get container status \"02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69\": rpc error: code = NotFound desc = could not find container \"02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69\": container with ID starting with 02ce18f65b930de7ec726ea9e7ea09730230689c66c6c78b5ed2477a93e82f69 not found: ID does not exist" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.187208 4849 scope.go:117] "RemoveContainer" containerID="e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361" Dec 09 11:30:59 crc kubenswrapper[4849]: E1209 11:30:59.195307 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361\": container with ID starting with e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361 not found: ID does not exist" containerID="e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.195356 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361"} err="failed to get container status \"e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361\": rpc error: code = NotFound desc = could not find container \"e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361\": container with ID starting with e86102f27e8d1a9768a70dca93658809349396c2deaee48b25023c2ebd469361 not found: ID does not exist" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.195385 4849 scope.go:117] "RemoveContainer" containerID="9954e8b2edb647fe88b08b32076617563cc648410652fa25006d2a6e2462c0eb" Dec 09 11:30:59 crc 
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.200446 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9954e8b2edb647fe88b08b32076617563cc648410652fa25006d2a6e2462c0eb"} err="failed to get container status \"9954e8b2edb647fe88b08b32076617563cc648410652fa25006d2a6e2462c0eb\": rpc error: code = NotFound desc = could not find container \"9954e8b2edb647fe88b08b32076617563cc648410652fa25006d2a6e2462c0eb\": container with ID starting with 9954e8b2edb647fe88b08b32076617563cc648410652fa25006d2a6e2462c0eb not found: ID does not exist"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.200470 4849 scope.go:117] "RemoveContainer" containerID="c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.229847 4849 scope.go:117] "RemoveContainer" containerID="7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.260506 4849 scope.go:117] "RemoveContainer" containerID="7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.321022 4849 scope.go:117] "RemoveContainer" containerID="c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181"
Dec 09 11:30:59 crc kubenswrapper[4849]: E1209 11:30:59.322228 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181\": container with ID starting with c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181 not found: ID does not exist" containerID="c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.322303 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181"} err="failed to get container status \"c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181\": rpc error: code = NotFound desc = could not find container \"c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181\": container with ID starting with c831de8f7a68c1871a00af900b4e6899468811cafc7844aef0e2cea3ed545181 not found: ID does not exist"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.322389 4849 scope.go:117] "RemoveContainer" containerID="7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc"
Dec 09 11:30:59 crc kubenswrapper[4849]: E1209 11:30:59.323397 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc\": container with ID starting with 7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc not found: ID does not exist" containerID="7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.323461 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc"} err="failed to get container status \"7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc\": rpc error: code = NotFound desc = could not find container \"7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc\": container with ID starting with 7e4d6fbc0e2c975ca4f17543be394d6a95ba76e279ebb0e399ee32ed3cedb7cc not found: ID does not exist"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.323489 4849 scope.go:117] "RemoveContainer" containerID="7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9"
Dec 09 11:30:59 crc kubenswrapper[4849]: E1209 11:30:59.323779 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9\": container with ID starting with 7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9 not found: ID does not exist" containerID="7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.323804 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9"} err="failed to get container status \"7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9\": rpc error: code = NotFound desc = could not find container \"7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9\": container with ID starting with 7ab7c76ff560ced24080bc23f8308a01eead157b4910dfff1e197fff923aa7d9 not found: ID does not exist"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.361059 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-824s2"]
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.365607 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-824s2"]
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.477553 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpwrl"]
Dec 09 11:30:59 crc kubenswrapper[4849]: W1209 11:30:59.529617 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb652dc_111b_4544_a20e_0c98d451825d.slice/crio-1abf88bed8c1e77644d8b221f99694288d0d8e6a7e451c0d70a0469b4d7bd64b WatchSource:0}: Error finding container 1abf88bed8c1e77644d8b221f99694288d0d8e6a7e451c0d70a0469b4d7bd64b: Status 404 returned error can't find the container with id 1abf88bed8c1e77644d8b221f99694288d0d8e6a7e451c0d70a0469b4d7bd64b
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.719991 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6zwl9_a64f1a61-70ff-4d3d-b033-e65b05414446/registry-server/0.log"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.720956 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zwl9"
Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.752342 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm"
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.807150 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfdd2\" (UniqueName: \"kubernetes.io/projected/3e7c4a38-1f7c-4cb1-b757-8250869e1597-kube-api-access-zfdd2\") pod \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.807209 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-utilities\") pod \"a64f1a61-70ff-4d3d-b033-e65b05414446\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.807279 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-catalog-content\") pod \"a64f1a61-70ff-4d3d-b033-e65b05414446\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.807305 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-trusted-ca\") pod \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.807375 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-operator-metrics\") pod \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\" (UID: \"3e7c4a38-1f7c-4cb1-b757-8250869e1597\") " Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.807481 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48chx\" (UniqueName: \"kubernetes.io/projected/a64f1a61-70ff-4d3d-b033-e65b05414446-kube-api-access-48chx\") pod \"a64f1a61-70ff-4d3d-b033-e65b05414446\" (UID: \"a64f1a61-70ff-4d3d-b033-e65b05414446\") " Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.808648 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-utilities" (OuterVolumeSpecName: "utilities") pod "a64f1a61-70ff-4d3d-b033-e65b05414446" (UID: "a64f1a61-70ff-4d3d-b033-e65b05414446"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.808669 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3e7c4a38-1f7c-4cb1-b757-8250869e1597" (UID: "3e7c4a38-1f7c-4cb1-b757-8250869e1597"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.812932 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3e7c4a38-1f7c-4cb1-b757-8250869e1597" (UID: "3e7c4a38-1f7c-4cb1-b757-8250869e1597"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.813266 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64f1a61-70ff-4d3d-b033-e65b05414446-kube-api-access-48chx" (OuterVolumeSpecName: "kube-api-access-48chx") pod "a64f1a61-70ff-4d3d-b033-e65b05414446" (UID: "a64f1a61-70ff-4d3d-b033-e65b05414446"). InnerVolumeSpecName "kube-api-access-48chx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.813322 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e7c4a38-1f7c-4cb1-b757-8250869e1597-kube-api-access-zfdd2" (OuterVolumeSpecName: "kube-api-access-zfdd2") pod "3e7c4a38-1f7c-4cb1-b757-8250869e1597" (UID: "3e7c4a38-1f7c-4cb1-b757-8250869e1597"). InnerVolumeSpecName "kube-api-access-zfdd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.909029 4849 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.909064 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48chx\" (UniqueName: \"kubernetes.io/projected/a64f1a61-70ff-4d3d-b033-e65b05414446-kube-api-access-48chx\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.909073 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfdd2\" (UniqueName: \"kubernetes.io/projected/3e7c4a38-1f7c-4cb1-b757-8250869e1597-kube-api-access-zfdd2\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.909082 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.909093 4849 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e7c4a38-1f7c-4cb1-b757-8250869e1597-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:30:59 crc kubenswrapper[4849]: I1209 11:30:59.928603 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a64f1a61-70ff-4d3d-b033-e65b05414446" (UID: "a64f1a61-70ff-4d3d-b033-e65b05414446"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.009875 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64f1a61-70ff-4d3d-b033-e65b05414446-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.038168 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.038165 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xksm" event={"ID":"3e7c4a38-1f7c-4cb1-b757-8250869e1597","Type":"ContainerDied","Data":"3224b25131510493ca2b62f4a0e5aac70d113ee01c7b37e7a5fd88126840037d"} Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.038357 4849 scope.go:117] "RemoveContainer" containerID="5846adfb8ea863f5f0091497887ec3d3c7e646604df8e22c60d5ef8cee398f05" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.039346 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" event={"ID":"6eb652dc-111b-4544-a20e-0c98d451825d","Type":"ContainerStarted","Data":"1abf88bed8c1e77644d8b221f99694288d0d8e6a7e451c0d70a0469b4d7bd64b"} Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.046639 4849 generic.go:334] "Generic (PLEG): container finished" podID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerID="90ff3a9e9ac2a8e40f279e0eb5888b10b142f567e3fe7768e0f235c0ff570735" exitCode=0 Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.046712 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhd9l" event={"ID":"2b6edbbd-c246-4696-ac45-efb4c27bbd1b","Type":"ContainerDied","Data":"90ff3a9e9ac2a8e40f279e0eb5888b10b142f567e3fe7768e0f235c0ff570735"} Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.055153 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fqwzq_5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a/registry-server/0.log" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.062643 4849 generic.go:334] "Generic (PLEG): container finished" podID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerID="d8ef1c854a54ad55a7f15685090b0d71ecf5d175731be60e29755ca1ad531265" exitCode=1 Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.062750 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqwzq" event={"ID":"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a","Type":"ContainerDied","Data":"d8ef1c854a54ad55a7f15685090b0d71ecf5d175731be60e29755ca1ad531265"} Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.066654 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlmnz_2b3a8f6d-222d-4fee-a997-b30bb399b6be/registry-server/0.log" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.067345 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xksm"] Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.067570 4849 generic.go:334] "Generic (PLEG): container finished" podID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerID="33f3afe09da0d6e353bea37949e502fb91885297a295fd08953ac15d9f514477" exitCode=1 Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.067656 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmnz" event={"ID":"2b3a8f6d-222d-4fee-a997-b30bb399b6be","Type":"ContainerDied","Data":"33f3afe09da0d6e353bea37949e502fb91885297a295fd08953ac15d9f514477"} Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.068760 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6zwl9_a64f1a61-70ff-4d3d-b033-e65b05414446/registry-server/0.log" Dec 09 
11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.069494 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zwl9" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.069502 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zwl9" event={"ID":"a64f1a61-70ff-4d3d-b033-e65b05414446","Type":"ContainerDied","Data":"9bb5ea59c700947931bf597dcb72f6fd7316b5d3c3d2e4cbf930efdad5ee34ff"} Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.069571 4849 scope.go:117] "RemoveContainer" containerID="7d316078fe0bf6fce21884f792abd120830db31532d778754f26faa895181bd7" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.070231 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xksm"] Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.084626 4849 scope.go:117] "RemoveContainer" containerID="6e50aac9eeb28ae7d04d818059943b1d05d9838a2154fbb310808048ebbc54a5" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.106266 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6zwl9"] Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.109501 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6zwl9"] Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.127273 4849 scope.go:117] "RemoveContainer" containerID="cbe41a0cef098434b13b307618f3dd87f0bdfbe24ca84f624b3edbe3fc9ca650" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.544603 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" path="/var/lib/kubelet/pods/08448cd5-1dba-4274-ab2b-16d4ac6c0746/volumes" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.545740 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" path="/var/lib/kubelet/pods/3e7c4a38-1f7c-4cb1-b757-8250869e1597/volumes" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.546519 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" path="/var/lib/kubelet/pods/a64f1a61-70ff-4d3d-b033-e65b05414446/volumes" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.624826 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9rq2m"] Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625132 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerName="registry-server" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625154 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerName="registry-server" Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625162 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerName="extract-utilities" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625170 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerName="extract-utilities" Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625186 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerName="registry-server" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625195 
4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerName="registry-server"
Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625205 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerName="extract-content"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625213 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerName="extract-content"
Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625223 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerName="extract-utilities"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625230 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerName="extract-utilities"
Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625240 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerName="extract-content"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625280 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerName="extract-content"
Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625295 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerName="extract-utilities"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625303 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerName="extract-utilities"
Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625317 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerName="extract-content"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625326 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerName="extract-content"
Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625359 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerName="marketplace-operator"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625369 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerName="marketplace-operator"
Dec 09 11:31:00 crc kubenswrapper[4849]: E1209 11:31:00.625380 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerName="registry-server"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625386 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerName="registry-server"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625637 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" containerName="registry-server"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625686 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e7c4a38-1f7c-4cb1-b757-8250869e1597" containerName="marketplace-operator"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625700 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="08448cd5-1dba-4274-ab2b-16d4ac6c0746" containerName="registry-server"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.625711 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64f1a61-70ff-4d3d-b033-e65b05414446" containerName="registry-server"
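The burst of cpu_manager, state_mem, and memory_manager lines above is one housekeeping pass: before admitting the just-added redhat-operators-9rq2m pod, the resource managers drop per-container accounting left behind by pods whose containers were removed earlier; the E-severity lines record routine removals, not failures. A minimal sketch of that stale-state sweep, assuming illustrative map types rather than the kubelet's real state objects:

```go
package main

import "fmt"

// containerKey mirrors how the state is keyed in the log lines:
// pod UID plus container name. Illustrative, not kubelet source.
type containerKey struct {
	podUID    string
	container string
}

// removeStaleState deletes assignments for containers that are no
// longer in the active set, which is what each paired
// "RemoveStaleState" / "Deleted CPUSet assignment" line records.
func removeStaleState(assignments map[containerKey][]int, active map[containerKey]bool) {
	for key := range assignments { // deleting during range is safe in Go
		if !active[key] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				key.podUID, key.container)
			delete(assignments, key)
		}
	}
}

func main() {
	assignments := map[containerKey][]int{
		{podUID: "a64f1a61-70ff-4d3d-b033-e65b05414446", container: "registry-server"}: {2, 3},
		{podUID: "591a8321-876b-43fc-a46e-9e632c31e6ad", container: "registry-server"}: {0, 1},
	}
	active := map[containerKey]bool{
		{podUID: "591a8321-876b-43fc-a46e-9e632c31e6ad", container: "registry-server"}: true,
	}
	removeStaleState(assignments, active)
	fmt.Println("assignments remaining:", len(assignments))
}
```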
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.626938 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9rq2m"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.629306 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9rq2m"]
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.721688 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dgjv\" (UniqueName: \"kubernetes.io/projected/591a8321-876b-43fc-a46e-9e632c31e6ad-kube-api-access-6dgjv\") pod \"redhat-operators-9rq2m\" (UID: \"591a8321-876b-43fc-a46e-9e632c31e6ad\") " pod="openshift-marketplace/redhat-operators-9rq2m"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.721775 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591a8321-876b-43fc-a46e-9e632c31e6ad-catalog-content\") pod \"redhat-operators-9rq2m\" (UID: \"591a8321-876b-43fc-a46e-9e632c31e6ad\") " pod="openshift-marketplace/redhat-operators-9rq2m"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.722036 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591a8321-876b-43fc-a46e-9e632c31e6ad-utilities\") pod \"redhat-operators-9rq2m\" (UID: \"591a8321-876b-43fc-a46e-9e632c31e6ad\") " pod="openshift-marketplace/redhat-operators-9rq2m"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.824977 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dgjv\" (UniqueName: \"kubernetes.io/projected/591a8321-876b-43fc-a46e-9e632c31e6ad-kube-api-access-6dgjv\") pod \"redhat-operators-9rq2m\" (UID: \"591a8321-876b-43fc-a46e-9e632c31e6ad\") " pod="openshift-marketplace/redhat-operators-9rq2m"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.825037 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591a8321-876b-43fc-a46e-9e632c31e6ad-catalog-content\") pod \"redhat-operators-9rq2m\" (UID: \"591a8321-876b-43fc-a46e-9e632c31e6ad\") " pod="openshift-marketplace/redhat-operators-9rq2m"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.825108 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591a8321-876b-43fc-a46e-9e632c31e6ad-utilities\") pod \"redhat-operators-9rq2m\" (UID: \"591a8321-876b-43fc-a46e-9e632c31e6ad\") " pod="openshift-marketplace/redhat-operators-9rq2m"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.825842 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591a8321-876b-43fc-a46e-9e632c31e6ad-utilities\") pod \"redhat-operators-9rq2m\" (UID: \"591a8321-876b-43fc-a46e-9e632c31e6ad\") " pod="openshift-marketplace/redhat-operators-9rq2m"
Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.825886 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/591a8321-876b-43fc-a46e-9e632c31e6ad-catalog-content\") pod \"redhat-operators-9rq2m\" (UID: \"591a8321-876b-43fc-a46e-9e632c31e6ad\") " pod="openshift-marketplace/redhat-operators-9rq2m" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.844921 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dgjv\" (UniqueName: \"kubernetes.io/projected/591a8321-876b-43fc-a46e-9e632c31e6ad-kube-api-access-6dgjv\") pod \"redhat-operators-9rq2m\" (UID: \"591a8321-876b-43fc-a46e-9e632c31e6ad\") " pod="openshift-marketplace/redhat-operators-9rq2m" Dec 09 11:31:00 crc kubenswrapper[4849]: I1209 11:31:00.960218 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9rq2m" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.172857 4849 generic.go:334] "Generic (PLEG): container finished" podID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerID="a14609a2f4b778a6854501810fe6db1a043afa41e5a38ea585a67473c7448ae5" exitCode=0 Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.172989 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27bsk" event={"ID":"91c773f3-2b45-488a-9b3c-5c0f2255f5cc","Type":"ContainerDied","Data":"a14609a2f4b778a6854501810fe6db1a043afa41e5a38ea585a67473c7448ae5"} Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.176934 4849 generic.go:334] "Generic (PLEG): container finished" podID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerID="6a81b169eb804253a0e59069ac80c104a16e2dea1a56c58d927bf9a6300985cd" exitCode=0 Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.176997 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmwmm" event={"ID":"33858531-f998-4cee-b45d-9d5cd8b45f2e","Type":"ContainerDied","Data":"6a81b169eb804253a0e59069ac80c104a16e2dea1a56c58d927bf9a6300985cd"} Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.519656 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9rq2m"] Dec 09 11:31:01 crc kubenswrapper[4849]: W1209 11:31:01.533812 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod591a8321_876b_43fc_a46e_9e632c31e6ad.slice/crio-4b0809ae06a19d2034fb0e998f2369eef0f5c26249a5663dc2a2d5fdd340493b WatchSource:0}: Error finding container 4b0809ae06a19d2034fb0e998f2369eef0f5c26249a5663dc2a2d5fdd340493b: Status 404 returned error can't find the container with id 4b0809ae06a19d2034fb0e998f2369eef0f5c26249a5663dc2a2d5fdd340493b Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.559805 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlmnz_2b3a8f6d-222d-4fee-a997-b30bb399b6be/registry-server/0.log" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.561520 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.571062 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.668691 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-utilities\") pod \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.668790 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-utilities\") pod \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.668898 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kktf\" (UniqueName: \"kubernetes.io/projected/2b3a8f6d-222d-4fee-a997-b30bb399b6be-kube-api-access-5kktf\") pod \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.668922 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mcp6\" (UniqueName: \"kubernetes.io/projected/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-kube-api-access-6mcp6\") pod \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.668945 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-catalog-content\") pod \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\" (UID: \"2b6edbbd-c246-4696-ac45-efb4c27bbd1b\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.668976 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-catalog-content\") pod \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\" (UID: \"2b3a8f6d-222d-4fee-a997-b30bb399b6be\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.676997 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-utilities" (OuterVolumeSpecName: "utilities") pod "2b6edbbd-c246-4696-ac45-efb4c27bbd1b" (UID: "2b6edbbd-c246-4696-ac45-efb4c27bbd1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.679191 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-utilities" (OuterVolumeSpecName: "utilities") pod "2b3a8f6d-222d-4fee-a997-b30bb399b6be" (UID: "2b3a8f6d-222d-4fee-a997-b30bb399b6be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.684761 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-kube-api-access-6mcp6" (OuterVolumeSpecName: "kube-api-access-6mcp6") pod "2b6edbbd-c246-4696-ac45-efb4c27bbd1b" (UID: "2b6edbbd-c246-4696-ac45-efb4c27bbd1b"). InnerVolumeSpecName "kube-api-access-6mcp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.692387 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3a8f6d-222d-4fee-a997-b30bb399b6be-kube-api-access-5kktf" (OuterVolumeSpecName: "kube-api-access-5kktf") pod "2b3a8f6d-222d-4fee-a997-b30bb399b6be" (UID: "2b3a8f6d-222d-4fee-a997-b30bb399b6be"). InnerVolumeSpecName "kube-api-access-5kktf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.736325 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fqwzq_5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a/registry-server/0.log" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.743990 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.771111 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b6edbbd-c246-4696-ac45-efb4c27bbd1b" (UID: "2b6edbbd-c246-4696-ac45-efb4c27bbd1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.777266 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9wr4\" (UniqueName: \"kubernetes.io/projected/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-kube-api-access-w9wr4\") pod \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.777329 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-catalog-content\") pod \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.777394 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-utilities\") pod \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\" (UID: \"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.777699 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kktf\" (UniqueName: \"kubernetes.io/projected/2b3a8f6d-222d-4fee-a997-b30bb399b6be-kube-api-access-5kktf\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.777718 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mcp6\" (UniqueName: \"kubernetes.io/projected/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-kube-api-access-6mcp6\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.777730 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.777775 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6edbbd-c246-4696-ac45-efb4c27bbd1b-utilities\") on node \"crc\" DevicePath \"\"" 
Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.777789 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.778259 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-utilities" (OuterVolumeSpecName: "utilities") pod "5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" (UID: "5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.782308 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-kube-api-access-w9wr4" (OuterVolumeSpecName: "kube-api-access-w9wr4") pod "5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" (UID: "5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a"). InnerVolumeSpecName "kube-api-access-w9wr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.861056 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.878623 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-utilities\") pod \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.878705 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d99j4\" (UniqueName: \"kubernetes.io/projected/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-kube-api-access-d99j4\") pod \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.878838 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-catalog-content\") pod \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\" (UID: \"91c773f3-2b45-488a-9b3c-5c0f2255f5cc\") " Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.879121 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.879146 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9wr4\" (UniqueName: \"kubernetes.io/projected/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-kube-api-access-w9wr4\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.880918 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-utilities" (OuterVolumeSpecName: "utilities") pod "91c773f3-2b45-488a-9b3c-5c0f2255f5cc" (UID: "91c773f3-2b45-488a-9b3c-5c0f2255f5cc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.885668 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-kube-api-access-d99j4" (OuterVolumeSpecName: "kube-api-access-d99j4") pod "91c773f3-2b45-488a-9b3c-5c0f2255f5cc" (UID: "91c773f3-2b45-488a-9b3c-5c0f2255f5cc"). InnerVolumeSpecName "kube-api-access-d99j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.899227 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91c773f3-2b45-488a-9b3c-5c0f2255f5cc" (UID: "91c773f3-2b45-488a-9b3c-5c0f2255f5cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.980034 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.980063 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:01 crc kubenswrapper[4849]: I1209 11:31:01.980074 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d99j4\" (UniqueName: \"kubernetes.io/projected/91c773f3-2b45-488a-9b3c-5c0f2255f5cc-kube-api-access-d99j4\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.153145 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.168700 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" (UID: "5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.182600 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-catalog-content\") pod \"33858531-f998-4cee-b45d-9d5cd8b45f2e\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.182675 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2477v\" (UniqueName: \"kubernetes.io/projected/33858531-f998-4cee-b45d-9d5cd8b45f2e-kube-api-access-2477v\") pod \"33858531-f998-4cee-b45d-9d5cd8b45f2e\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.182738 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-utilities\") pod \"33858531-f998-4cee-b45d-9d5cd8b45f2e\" (UID: \"33858531-f998-4cee-b45d-9d5cd8b45f2e\") " Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.183585 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-utilities" (OuterVolumeSpecName: "utilities") pod "33858531-f998-4cee-b45d-9d5cd8b45f2e" (UID: "33858531-f998-4cee-b45d-9d5cd8b45f2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.183990 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.184012 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.185261 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33858531-f998-4cee-b45d-9d5cd8b45f2e-kube-api-access-2477v" (OuterVolumeSpecName: "kube-api-access-2477v") pod "33858531-f998-4cee-b45d-9d5cd8b45f2e" (UID: "33858531-f998-4cee-b45d-9d5cd8b45f2e"). InnerVolumeSpecName "kube-api-access-2477v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.190509 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" event={"ID":"6eb652dc-111b-4544-a20e-0c98d451825d","Type":"ContainerStarted","Data":"b6b96215d3e28521709a5a1e107689ee81aa6483190ac9b85f709e80835ba3c8"} Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.190887 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.192340 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhd9l" event={"ID":"2b6edbbd-c246-4696-ac45-efb4c27bbd1b","Type":"ContainerDied","Data":"9f7ff2e5195d0f94021a176371509186c74d952553fcd03dd1b8f67953c72d5b"} Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.192380 4849 scope.go:117] "RemoveContainer" containerID="90ff3a9e9ac2a8e40f279e0eb5888b10b142f567e3fe7768e0f235c0ff570735" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.192495 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhd9l" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.194684 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.195469 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fqwzq_5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a/registry-server/0.log" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.200174 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqwzq" event={"ID":"5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a","Type":"ContainerDied","Data":"9c9d9748cd168675a3eb4bd8bfaa84701ff3d395b2435c19fe8c482cc42c4315"} Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.200283 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fqwzq" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.208203 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rq2m" event={"ID":"591a8321-876b-43fc-a46e-9e632c31e6ad","Type":"ContainerStarted","Data":"4b0809ae06a19d2034fb0e998f2369eef0f5c26249a5663dc2a2d5fdd340493b"} Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.212583 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlmnz_2b3a8f6d-222d-4fee-a997-b30bb399b6be/registry-server/0.log" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.216383 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmnz" event={"ID":"2b3a8f6d-222d-4fee-a997-b30bb399b6be","Type":"ContainerDied","Data":"3495251836b380e412e438e913527c5c60d8068561a05f34edd54e04ac36270f"} Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.216548 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlmnz" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.223283 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fpwrl" podStartSLOduration=4.2232644520000004 podStartE2EDuration="4.223264452s" podCreationTimestamp="2025-12-09 11:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:31:02.222902792 +0000 UTC m=+244.762787128" watchObservedRunningTime="2025-12-09 11:31:02.223264452 +0000 UTC m=+244.763148778" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.224239 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27bsk" event={"ID":"91c773f3-2b45-488a-9b3c-5c0f2255f5cc","Type":"ContainerDied","Data":"bc5901a0d0fc6df3fd95672031396ca22ab3c1555251aefb9934eb52eefc3a9a"} Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.224344 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27bsk" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.236210 4849 scope.go:117] "RemoveContainer" containerID="1c26c3ec6f7ae25a85e4ad15bcde058d561115b3a78d5ed03e81d26b86f6bc8a" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.236239 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmwmm" event={"ID":"33858531-f998-4cee-b45d-9d5cd8b45f2e","Type":"ContainerDied","Data":"f519c9c4bffbdfedb0fab9e1ea29f3a0705756fae93f2a6d517e5df14ec33254"} Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.236339 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmwmm" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.248343 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33858531-f998-4cee-b45d-9d5cd8b45f2e" (UID: "33858531-f998-4cee-b45d-9d5cd8b45f2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.267313 4849 scope.go:117] "RemoveContainer" containerID="0ce1b2c9a4fff65333816008444b0147b6e9292b2444f4be733c865061e21f95" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.286963 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2477v\" (UniqueName: \"kubernetes.io/projected/33858531-f998-4cee-b45d-9d5cd8b45f2e-kube-api-access-2477v\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.287010 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33858531-f998-4cee-b45d-9d5cd8b45f2e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.290443 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhd9l"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.304355 4849 scope.go:117] "RemoveContainer" containerID="d8ef1c854a54ad55a7f15685090b0d71ecf5d175731be60e29755ca1ad531265" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.313179 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dhd9l"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.344574 4849 scope.go:117] "RemoveContainer" containerID="e4935c2a90235ce9380dea8a6c57d898e6717eba62cba4a1dc9baec74aba7f21" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.349519 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27bsk"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.364724 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-27bsk"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.369791 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fqwzq"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.372569 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fqwzq"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.373720 4849 scope.go:117] "RemoveContainer" containerID="e04c0a0a256bb424ba20b490087758859193122db6c575ac23f1781011eaff99" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.395245 4849 scope.go:117] "RemoveContainer" containerID="33f3afe09da0d6e353bea37949e502fb91885297a295fd08953ac15d9f514477" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.408506 4849 scope.go:117] "RemoveContainer" containerID="5b4c546614f3c29ec92d8409502a6478a8f355ba08aac7f225f4b7189f201dfd" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.423532 4849 scope.go:117] "RemoveContainer" containerID="bae6a375484a3a989a8e55d5a3d287eed9b93f9e1aa314e9410e6ab270cb8a05" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.435391 4849 scope.go:117] "RemoveContainer" containerID="a14609a2f4b778a6854501810fe6db1a043afa41e5a38ea585a67473c7448ae5" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.513767 4849 scope.go:117] "RemoveContainer" containerID="03008738697b56b1b964951c82479ea553f7704b8da4ac1ffc5ace100a3b5d68" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.530006 4849 scope.go:117] "RemoveContainer" containerID="b2298c8d0dadc4a075f43fce30c6eeceacae0294beb7193336ddec2611abe1a2" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.531108 4849 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f19e5981-0356-4c0d-842b-211cfbef65b3" (UID: "f19e5981-0356-4c0d-842b-211cfbef65b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.542963 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" path="/var/lib/kubelet/pods/2b6edbbd-c246-4696-ac45-efb4c27bbd1b/volumes" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.544004 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" path="/var/lib/kubelet/pods/5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a/volumes" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.545160 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" path="/var/lib/kubelet/pods/91c773f3-2b45-488a-9b3c-5c0f2255f5cc/volumes" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.548291 4849 scope.go:117] "RemoveContainer" containerID="6a81b169eb804253a0e59069ac80c104a16e2dea1a56c58d927bf9a6300985cd" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.591943 4849 scope.go:117] "RemoveContainer" containerID="c05f480ebab02084603329af26bd4daec7e28b57546a88d16cd4aad2a735c895" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.592225 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19e5981-0356-4c0d-842b-211cfbef65b3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.597847 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmwmm"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.601510 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmwmm"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.610954 4849 scope.go:117] "RemoveContainer" containerID="6d1b01738ae9a7dd4a5262b3be62dced38ee844dfd3586be22a38babe78c6232" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.667394 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-md5l9"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.670106 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-md5l9"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.789059 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b3a8f6d-222d-4fee-a997-b30bb399b6be" (UID: "2b3a8f6d-222d-4fee-a997-b30bb399b6be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.794154 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b3a8f6d-222d-4fee-a997-b30bb399b6be-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.850286 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlmnz"] Dec 09 11:31:02 crc kubenswrapper[4849]: I1209 11:31:02.854289 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tlmnz"] Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.019894 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dhpb4"] Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.020399 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.020454 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.020466 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.020474 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.020488 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.020529 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.020541 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.020547 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.020555 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.020562 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.020570 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.020604 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.020614 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.020691 4849 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.020705 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.020711 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.020718 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.020724 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.021357 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021445 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.021476 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021489 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.021522 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021541 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.021567 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021574 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerName="extract-content" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.021598 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021607 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: E1209 11:31:03.021618 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021626 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerName="extract-utilities" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021790 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c773f3-2b45-488a-9b3c-5c0f2255f5cc" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021807 4849 
memory_manager.go:354] "RemoveStaleState removing state" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021816 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea602d1-ec9a-4a2b-8b4f-935d9ff4514a" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021824 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.021835 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6edbbd-c246-4696-ac45-efb4c27bbd1b" containerName="registry-server" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.024347 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.026834 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dhpb4"] Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.032127 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.099088 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6fc1b93-1648-4dea-b4ed-8eb4e307011a-utilities\") pod \"certified-operators-dhpb4\" (UID: \"e6fc1b93-1648-4dea-b4ed-8eb4e307011a\") " pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.099172 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6fc1b93-1648-4dea-b4ed-8eb4e307011a-catalog-content\") pod \"certified-operators-dhpb4\" (UID: \"e6fc1b93-1648-4dea-b4ed-8eb4e307011a\") " pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.099210 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hn9k\" (UniqueName: \"kubernetes.io/projected/e6fc1b93-1648-4dea-b4ed-8eb4e307011a-kube-api-access-6hn9k\") pod \"certified-operators-dhpb4\" (UID: \"e6fc1b93-1648-4dea-b4ed-8eb4e307011a\") " pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.201177 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6fc1b93-1648-4dea-b4ed-8eb4e307011a-catalog-content\") pod \"certified-operators-dhpb4\" (UID: \"e6fc1b93-1648-4dea-b4ed-8eb4e307011a\") " pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.201257 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hn9k\" (UniqueName: \"kubernetes.io/projected/e6fc1b93-1648-4dea-b4ed-8eb4e307011a-kube-api-access-6hn9k\") pod \"certified-operators-dhpb4\" (UID: \"e6fc1b93-1648-4dea-b4ed-8eb4e307011a\") " pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.201325 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e6fc1b93-1648-4dea-b4ed-8eb4e307011a-utilities\") pod \"certified-operators-dhpb4\" (UID: \"e6fc1b93-1648-4dea-b4ed-8eb4e307011a\") " pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.202016 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6fc1b93-1648-4dea-b4ed-8eb4e307011a-catalog-content\") pod \"certified-operators-dhpb4\" (UID: \"e6fc1b93-1648-4dea-b4ed-8eb4e307011a\") " pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.202087 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6fc1b93-1648-4dea-b4ed-8eb4e307011a-utilities\") pod \"certified-operators-dhpb4\" (UID: \"e6fc1b93-1648-4dea-b4ed-8eb4e307011a\") " pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.225029 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hn9k\" (UniqueName: \"kubernetes.io/projected/e6fc1b93-1648-4dea-b4ed-8eb4e307011a-kube-api-access-6hn9k\") pod \"certified-operators-dhpb4\" (UID: \"e6fc1b93-1648-4dea-b4ed-8eb4e307011a\") " pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.251004 4849 generic.go:334] "Generic (PLEG): container finished" podID="591a8321-876b-43fc-a46e-9e632c31e6ad" containerID="6899017d50741bd93dc25729a4f0074e86af3311281c424b7614de3eb9da2bce" exitCode=0 Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.251075 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rq2m" event={"ID":"591a8321-876b-43fc-a46e-9e632c31e6ad","Type":"ContainerDied","Data":"6899017d50741bd93dc25729a4f0074e86af3311281c424b7614de3eb9da2bce"} Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.340347 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.576726 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dhpb4"] Dec 09 11:31:03 crc kubenswrapper[4849]: W1209 11:31:03.588954 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6fc1b93_1648_4dea_b4ed_8eb4e307011a.slice/crio-9b9de509c7dc017f90fe5ebc906c2a5842d2db458ad1fdd559f72f693f63f7eb WatchSource:0}: Error finding container 9b9de509c7dc017f90fe5ebc906c2a5842d2db458ad1fdd559f72f693f63f7eb: Status 404 returned error can't find the container with id 9b9de509c7dc017f90fe5ebc906c2a5842d2db458ad1fdd559f72f693f63f7eb Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.621936 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fw7ws"] Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.623020 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.633943 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.643023 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fw7ws"] Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.716292 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrd9k\" (UniqueName: \"kubernetes.io/projected/1d0053b5-2860-49fc-98d9-a9d08c9d6b19-kube-api-access-hrd9k\") pod \"redhat-marketplace-fw7ws\" (UID: \"1d0053b5-2860-49fc-98d9-a9d08c9d6b19\") " pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.716363 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d0053b5-2860-49fc-98d9-a9d08c9d6b19-utilities\") pod \"redhat-marketplace-fw7ws\" (UID: \"1d0053b5-2860-49fc-98d9-a9d08c9d6b19\") " pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.716443 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d0053b5-2860-49fc-98d9-a9d08c9d6b19-catalog-content\") pod \"redhat-marketplace-fw7ws\" (UID: \"1d0053b5-2860-49fc-98d9-a9d08c9d6b19\") " pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.817628 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrd9k\" (UniqueName: \"kubernetes.io/projected/1d0053b5-2860-49fc-98d9-a9d08c9d6b19-kube-api-access-hrd9k\") pod \"redhat-marketplace-fw7ws\" (UID: \"1d0053b5-2860-49fc-98d9-a9d08c9d6b19\") " pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.818018 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d0053b5-2860-49fc-98d9-a9d08c9d6b19-utilities\") pod \"redhat-marketplace-fw7ws\" (UID: \"1d0053b5-2860-49fc-98d9-a9d08c9d6b19\") " pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.818086 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d0053b5-2860-49fc-98d9-a9d08c9d6b19-catalog-content\") pod \"redhat-marketplace-fw7ws\" (UID: \"1d0053b5-2860-49fc-98d9-a9d08c9d6b19\") " pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.819048 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d0053b5-2860-49fc-98d9-a9d08c9d6b19-catalog-content\") pod \"redhat-marketplace-fw7ws\" (UID: \"1d0053b5-2860-49fc-98d9-a9d08c9d6b19\") " pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.820564 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d0053b5-2860-49fc-98d9-a9d08c9d6b19-utilities\") pod \"redhat-marketplace-fw7ws\" (UID: 
\"1d0053b5-2860-49fc-98d9-a9d08c9d6b19\") " pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.838452 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrd9k\" (UniqueName: \"kubernetes.io/projected/1d0053b5-2860-49fc-98d9-a9d08c9d6b19-kube-api-access-hrd9k\") pod \"redhat-marketplace-fw7ws\" (UID: \"1d0053b5-2860-49fc-98d9-a9d08c9d6b19\") " pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:03 crc kubenswrapper[4849]: I1209 11:31:03.988565 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:04 crc kubenswrapper[4849]: I1209 11:31:04.224238 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fw7ws"] Dec 09 11:31:04 crc kubenswrapper[4849]: W1209 11:31:04.230585 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d0053b5_2860_49fc_98d9_a9d08c9d6b19.slice/crio-c3fa0ae0fe46b9b04936dea263580c95c258a4288b88430c5823694d3383f0a7 WatchSource:0}: Error finding container c3fa0ae0fe46b9b04936dea263580c95c258a4288b88430c5823694d3383f0a7: Status 404 returned error can't find the container with id c3fa0ae0fe46b9b04936dea263580c95c258a4288b88430c5823694d3383f0a7 Dec 09 11:31:04 crc kubenswrapper[4849]: I1209 11:31:04.264184 4849 generic.go:334] "Generic (PLEG): container finished" podID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" containerID="50951a372815ae55fcd0afddcc0ce93d27f3363de65cfd4641a56a371f25ae78" exitCode=0 Dec 09 11:31:04 crc kubenswrapper[4849]: I1209 11:31:04.264252 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhpb4" event={"ID":"e6fc1b93-1648-4dea-b4ed-8eb4e307011a","Type":"ContainerDied","Data":"50951a372815ae55fcd0afddcc0ce93d27f3363de65cfd4641a56a371f25ae78"} Dec 09 11:31:04 crc kubenswrapper[4849]: I1209 11:31:04.264294 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhpb4" event={"ID":"e6fc1b93-1648-4dea-b4ed-8eb4e307011a","Type":"ContainerStarted","Data":"9b9de509c7dc017f90fe5ebc906c2a5842d2db458ad1fdd559f72f693f63f7eb"} Dec 09 11:31:04 crc kubenswrapper[4849]: I1209 11:31:04.285428 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rq2m" event={"ID":"591a8321-876b-43fc-a46e-9e632c31e6ad","Type":"ContainerStarted","Data":"41bd0fbb6ec519b6a669456ffa14dfa7dae8a27c973281db8ff0517789da27cc"} Dec 09 11:31:04 crc kubenswrapper[4849]: I1209 11:31:04.288079 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fw7ws" event={"ID":"1d0053b5-2860-49fc-98d9-a9d08c9d6b19","Type":"ContainerStarted","Data":"c3fa0ae0fe46b9b04936dea263580c95c258a4288b88430c5823694d3383f0a7"} Dec 09 11:31:04 crc kubenswrapper[4849]: I1209 11:31:04.542727 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3a8f6d-222d-4fee-a997-b30bb399b6be" path="/var/lib/kubelet/pods/2b3a8f6d-222d-4fee-a997-b30bb399b6be/volumes" Dec 09 11:31:04 crc kubenswrapper[4849]: I1209 11:31:04.543559 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33858531-f998-4cee-b45d-9d5cd8b45f2e" path="/var/lib/kubelet/pods/33858531-f998-4cee-b45d-9d5cd8b45f2e/volumes" Dec 09 11:31:04 crc kubenswrapper[4849]: I1209 11:31:04.547503 4849 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="f19e5981-0356-4c0d-842b-211cfbef65b3" path="/var/lib/kubelet/pods/f19e5981-0356-4c0d-842b-211cfbef65b3/volumes" Dec 09 11:31:05 crc kubenswrapper[4849]: I1209 11:31:05.294479 4849 generic.go:334] "Generic (PLEG): container finished" podID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" containerID="ececc1e1ce917ecede915857c61969803b7eaf38803b18eedfab173236d001a1" exitCode=0 Dec 09 11:31:05 crc kubenswrapper[4849]: I1209 11:31:05.294565 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhpb4" event={"ID":"e6fc1b93-1648-4dea-b4ed-8eb4e307011a","Type":"ContainerDied","Data":"ececc1e1ce917ecede915857c61969803b7eaf38803b18eedfab173236d001a1"} Dec 09 11:31:05 crc kubenswrapper[4849]: I1209 11:31:05.300308 4849 generic.go:334] "Generic (PLEG): container finished" podID="591a8321-876b-43fc-a46e-9e632c31e6ad" containerID="41bd0fbb6ec519b6a669456ffa14dfa7dae8a27c973281db8ff0517789da27cc" exitCode=0 Dec 09 11:31:05 crc kubenswrapper[4849]: I1209 11:31:05.300378 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rq2m" event={"ID":"591a8321-876b-43fc-a46e-9e632c31e6ad","Type":"ContainerDied","Data":"41bd0fbb6ec519b6a669456ffa14dfa7dae8a27c973281db8ff0517789da27cc"} Dec 09 11:31:05 crc kubenswrapper[4849]: I1209 11:31:05.308240 4849 generic.go:334] "Generic (PLEG): container finished" podID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" containerID="58864fa460f2d8ee92d535b8834bdedd857f4c96ee614518dc3322affa1c3805" exitCode=0 Dec 09 11:31:05 crc kubenswrapper[4849]: I1209 11:31:05.308291 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fw7ws" event={"ID":"1d0053b5-2860-49fc-98d9-a9d08c9d6b19","Type":"ContainerDied","Data":"58864fa460f2d8ee92d535b8834bdedd857f4c96ee614518dc3322affa1c3805"} Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.017341 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cq7jk"] Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.018449 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.021788 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.026946 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cq7jk"] Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.045772 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdl7\" (UniqueName: \"kubernetes.io/projected/22b13fa0-7feb-45d4-8415-1834db2f96c5-kube-api-access-ccdl7\") pod \"community-operators-cq7jk\" (UID: \"22b13fa0-7feb-45d4-8415-1834db2f96c5\") " pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.045816 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22b13fa0-7feb-45d4-8415-1834db2f96c5-catalog-content\") pod \"community-operators-cq7jk\" (UID: \"22b13fa0-7feb-45d4-8415-1834db2f96c5\") " pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.045845 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22b13fa0-7feb-45d4-8415-1834db2f96c5-utilities\") pod \"community-operators-cq7jk\" (UID: \"22b13fa0-7feb-45d4-8415-1834db2f96c5\") " pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.147424 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22b13fa0-7feb-45d4-8415-1834db2f96c5-utilities\") pod \"community-operators-cq7jk\" (UID: \"22b13fa0-7feb-45d4-8415-1834db2f96c5\") " pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.147741 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccdl7\" (UniqueName: \"kubernetes.io/projected/22b13fa0-7feb-45d4-8415-1834db2f96c5-kube-api-access-ccdl7\") pod \"community-operators-cq7jk\" (UID: \"22b13fa0-7feb-45d4-8415-1834db2f96c5\") " pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.147850 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22b13fa0-7feb-45d4-8415-1834db2f96c5-catalog-content\") pod \"community-operators-cq7jk\" (UID: \"22b13fa0-7feb-45d4-8415-1834db2f96c5\") " pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.148077 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22b13fa0-7feb-45d4-8415-1834db2f96c5-utilities\") pod \"community-operators-cq7jk\" (UID: \"22b13fa0-7feb-45d4-8415-1834db2f96c5\") " pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.148375 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22b13fa0-7feb-45d4-8415-1834db2f96c5-catalog-content\") pod \"community-operators-cq7jk\" (UID: 
\"22b13fa0-7feb-45d4-8415-1834db2f96c5\") " pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.167362 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccdl7\" (UniqueName: \"kubernetes.io/projected/22b13fa0-7feb-45d4-8415-1834db2f96c5-kube-api-access-ccdl7\") pod \"community-operators-cq7jk\" (UID: \"22b13fa0-7feb-45d4-8415-1834db2f96c5\") " pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.315675 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dhpb4" event={"ID":"e6fc1b93-1648-4dea-b4ed-8eb4e307011a","Type":"ContainerStarted","Data":"e4d3f5c5531174c0fe5be7314de4abca6ef6fd0a611c2faa12e71ffe4b6852e0"} Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.318035 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9rq2m" event={"ID":"591a8321-876b-43fc-a46e-9e632c31e6ad","Type":"ContainerStarted","Data":"74573ac3b5528dddb968b2e642fe1cdefb9ab85bd9fcbd335fcb5b825c0c1a46"} Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.319307 4849 generic.go:334] "Generic (PLEG): container finished" podID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" containerID="c39cf9d5a20b54c3e8087d6a9f4bdbbc5d35be0074acf634f2175f5d41323f71" exitCode=0 Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.319351 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fw7ws" event={"ID":"1d0053b5-2860-49fc-98d9-a9d08c9d6b19","Type":"ContainerDied","Data":"c39cf9d5a20b54c3e8087d6a9f4bdbbc5d35be0074acf634f2175f5d41323f71"} Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.344086 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.364915 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dhpb4" podStartSLOduration=2.983353979 podStartE2EDuration="4.364900841s" podCreationTimestamp="2025-12-09 11:31:02 +0000 UTC" firstStartedPulling="2025-12-09 11:31:04.27070808 +0000 UTC m=+246.810592396" lastFinishedPulling="2025-12-09 11:31:05.652254932 +0000 UTC m=+248.192139258" observedRunningTime="2025-12-09 11:31:06.341728889 +0000 UTC m=+248.881613215" watchObservedRunningTime="2025-12-09 11:31:06.364900841 +0000 UTC m=+248.904785157" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.392915 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9rq2m" podStartSLOduration=3.92317441 podStartE2EDuration="6.392895774s" podCreationTimestamp="2025-12-09 11:31:00 +0000 UTC" firstStartedPulling="2025-12-09 11:31:03.252602584 +0000 UTC m=+245.792486910" lastFinishedPulling="2025-12-09 11:31:05.722323958 +0000 UTC m=+248.262208274" observedRunningTime="2025-12-09 11:31:06.389950879 +0000 UTC m=+248.929835195" watchObservedRunningTime="2025-12-09 11:31:06.392895774 +0000 UTC m=+248.932780090" Dec 09 11:31:06 crc kubenswrapper[4849]: I1209 11:31:06.568354 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cq7jk"] Dec 09 11:31:06 crc kubenswrapper[4849]: W1209 11:31:06.584571 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22b13fa0_7feb_45d4_8415_1834db2f96c5.slice/crio-7fb27b7bde6d72bc1cb7cf2d9f8632fe9f594d433ce637930f1e868c49a22653 WatchSource:0}: Error finding container 7fb27b7bde6d72bc1cb7cf2d9f8632fe9f594d433ce637930f1e868c49a22653: Status 404 returned error can't find the container with id 7fb27b7bde6d72bc1cb7cf2d9f8632fe9f594d433ce637930f1e868c49a22653 Dec 09 11:31:07 crc kubenswrapper[4849]: I1209 11:31:07.329015 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fw7ws" event={"ID":"1d0053b5-2860-49fc-98d9-a9d08c9d6b19","Type":"ContainerStarted","Data":"9af32c9355ea687f01b7d96d81387ac815a2fba343ee51f63fdfd69aed948cff"} Dec 09 11:31:07 crc kubenswrapper[4849]: I1209 11:31:07.331891 4849 generic.go:334] "Generic (PLEG): container finished" podID="22b13fa0-7feb-45d4-8415-1834db2f96c5" containerID="7fe74173a3425d56a6b2f32f9f063870de17a9b68bb90f41a400d7a32a6ceb07" exitCode=0 Dec 09 11:31:07 crc kubenswrapper[4849]: I1209 11:31:07.331948 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cq7jk" event={"ID":"22b13fa0-7feb-45d4-8415-1834db2f96c5","Type":"ContainerDied","Data":"7fe74173a3425d56a6b2f32f9f063870de17a9b68bb90f41a400d7a32a6ceb07"} Dec 09 11:31:07 crc kubenswrapper[4849]: I1209 11:31:07.331983 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cq7jk" event={"ID":"22b13fa0-7feb-45d4-8415-1834db2f96c5","Type":"ContainerStarted","Data":"7fb27b7bde6d72bc1cb7cf2d9f8632fe9f594d433ce637930f1e868c49a22653"} Dec 09 11:31:07 crc kubenswrapper[4849]: I1209 11:31:07.352621 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fw7ws" podStartSLOduration=2.865893889 podStartE2EDuration="4.352603521s" 
podCreationTimestamp="2025-12-09 11:31:03 +0000 UTC" firstStartedPulling="2025-12-09 11:31:05.309819642 +0000 UTC m=+247.849703958" lastFinishedPulling="2025-12-09 11:31:06.796529274 +0000 UTC m=+249.336413590" observedRunningTime="2025-12-09 11:31:07.349052531 +0000 UTC m=+249.888936857" watchObservedRunningTime="2025-12-09 11:31:07.352603521 +0000 UTC m=+249.892487837" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.831953 4849 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.833269 4849 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.833634 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0" gracePeriod=15 Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.833806 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.834211 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce" gracePeriod=15 Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.834271 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b" gracePeriod=15 Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.834316 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a" gracePeriod=15 Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.834377 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286" gracePeriod=15 Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.834861 4849 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 11:31:08 crc kubenswrapper[4849]: E1209 11:31:08.835012 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835027 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 11:31:08 crc kubenswrapper[4849]: E1209 11:31:08.835044 4849 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835052 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 09 11:31:08 crc kubenswrapper[4849]: E1209 11:31:08.835062 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835070 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 11:31:08 crc kubenswrapper[4849]: E1209 11:31:08.835080 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835086 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 11:31:08 crc kubenswrapper[4849]: E1209 11:31:08.835093 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835102 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 11:31:08 crc kubenswrapper[4849]: E1209 11:31:08.835112 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835119 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 11:31:08 crc kubenswrapper[4849]: E1209 11:31:08.835129 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835136 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 11:31:08 crc kubenswrapper[4849]: E1209 11:31:08.835146 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835152 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835283 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835297 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835306 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835318 4849 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835326 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835337 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.835577 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.879219 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.879262 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.879288 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.879331 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.879352 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.879373 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.879398 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.879433 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980480 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980522 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980560 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980575 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980593 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980612 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980633 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980651 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980710 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980745 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980767 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980790 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980808 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980826 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980842 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:08 crc kubenswrapper[4849]: I1209 11:31:08.980861 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.346932 4849 generic.go:334] "Generic (PLEG): container finished" podID="22b13fa0-7feb-45d4-8415-1834db2f96c5" containerID="ca336784091c70368af830080a3bf64936ec9c8c8837c4c28d792bc51246a28b" exitCode=0 Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.347133 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cq7jk" 
event={"ID":"22b13fa0-7feb-45d4-8415-1834db2f96c5","Type":"ContainerDied","Data":"ca336784091c70368af830080a3bf64936ec9c8c8837c4c28d792bc51246a28b"} Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.348140 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.348495 4849 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: E1209 11:31:09.352463 4849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-cq7jk.187f88b035e2ec8e openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-cq7jk,UID:22b13fa0-7feb-45d4-8415-1834db2f96c5,APIVersion:v1,ResourceVersion:29563,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 11:31:09.35220955 +0000 UTC m=+251.892093866,LastTimestamp:2025-12-09 11:31:09.35220955 +0000 UTC m=+251.892093866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.354472 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.355478 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.356279 4849 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce" exitCode=0 Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.356300 4849 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b" exitCode=0 Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.356309 4849 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a" exitCode=0 Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.356315 4849 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286" exitCode=2 Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.356375 4849 scope.go:117] "RemoveContainer" containerID="8fa7063058921985a8e0edb257bc171dd5cfbeffb2640feaa9a59ca634a4d09b" Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.362478 4849 generic.go:334] "Generic (PLEG): container finished" podID="f458026c-1433-4a58-b921-1088b8e9a509" containerID="e4e407d3154b818a06cb66e15da50241d23ab28a6d3aba0de06a9012afc1069d" exitCode=0 Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.362523 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f458026c-1433-4a58-b921-1088b8e9a509","Type":"ContainerDied","Data":"e4e407d3154b818a06cb66e15da50241d23ab28a6d3aba0de06a9012afc1069d"} Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.363069 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.363262 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.363530 4849 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: E1209 11:31:09.555590 4849 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: E1209 11:31:09.555972 4849 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: E1209 11:31:09.556359 4849 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: E1209 11:31:09.556561 4849 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: E1209 11:31:09.556778 4849 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:09 crc kubenswrapper[4849]: I1209 11:31:09.556825 4849 
controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 09 11:31:09 crc kubenswrapper[4849]: E1209 11:31:09.557024 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms" Dec 09 11:31:09 crc kubenswrapper[4849]: E1209 11:31:09.757813 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Dec 09 11:31:10 crc kubenswrapper[4849]: E1209 11:31:10.158729 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Dec 09 11:31:10 crc kubenswrapper[4849]: I1209 11:31:10.374716 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 11:31:10 crc kubenswrapper[4849]: E1209 11:31:10.960006 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Dec 09 11:31:10 crc kubenswrapper[4849]: I1209 11:31:10.960732 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9rq2m" Dec 09 11:31:10 crc kubenswrapper[4849]: I1209 11:31:10.960776 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9rq2m" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.382523 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f458026c-1433-4a58-b921-1088b8e9a509","Type":"ContainerDied","Data":"86f0159f9f7da912e56286471b1c2886603b87b314f02ee1fbda249f6c67d46e"} Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.382870 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f0159f9f7da912e56286471b1c2886603b87b314f02ee1fbda249f6c67d46e" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.822526 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.823216 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.823377 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.937159 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.938299 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.938777 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.938845 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f458026c-1433-4a58-b921-1088b8e9a509" (UID: "f458026c-1433-4a58-b921-1088b8e9a509"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.938806 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-kubelet-dir\") pod \"f458026c-1433-4a58-b921-1088b8e9a509\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.938932 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f458026c-1433-4a58-b921-1088b8e9a509-kube-api-access\") pod \"f458026c-1433-4a58-b921-1088b8e9a509\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.938967 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-var-lock\") pod \"f458026c-1433-4a58-b921-1088b8e9a509\" (UID: \"f458026c-1433-4a58-b921-1088b8e9a509\") " Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.939133 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.939141 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-var-lock" (OuterVolumeSpecName: "var-lock") pod "f458026c-1433-4a58-b921-1088b8e9a509" (UID: "f458026c-1433-4a58-b921-1088b8e9a509"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.939386 4849 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.939694 4849 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.939715 4849 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f458026c-1433-4a58-b921-1088b8e9a509-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:11 crc kubenswrapper[4849]: I1209 11:31:11.946093 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f458026c-1433-4a58-b921-1088b8e9a509-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f458026c-1433-4a58-b921-1088b8e9a509" (UID: "f458026c-1433-4a58-b921-1088b8e9a509"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.040391 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.040860 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.040922 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.041259 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f458026c-1433-4a58-b921-1088b8e9a509-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.041320 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.041352 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.041373 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.076842 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9rq2m" podUID="591a8321-876b-43fc-a46e-9e632c31e6ad" containerName="registry-server" probeResult="failure" output=< Dec 09 11:31:12 crc kubenswrapper[4849]: timeout: failed to connect service ":50051" within 1s Dec 09 11:31:12 crc kubenswrapper[4849]: > Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.142613 4849 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.142657 4849 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.142671 4849 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.390671 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.391524 4849 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0" exitCode=0 Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.391623 4849 scope.go:117] "RemoveContainer" containerID="5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.391680 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.398840 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.399296 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cq7jk" event={"ID":"22b13fa0-7feb-45d4-8415-1834db2f96c5","Type":"ContainerStarted","Data":"107fa69051ae793e65ebe6ebba6a92e7cbc408ed628121a61ff7ddf047b3e187"} Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.399925 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.400133 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.400733 4849 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.409856 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.410314 4849 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.411156 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.414154 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.414426 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:12 crc kubenswrapper[4849]: 
I1209 11:31:12.414620 4849 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.420705 4849 scope.go:117] "RemoveContainer" containerID="ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.435196 4849 scope.go:117] "RemoveContainer" containerID="a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.447278 4849 scope.go:117] "RemoveContainer" containerID="25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.463154 4849 scope.go:117] "RemoveContainer" containerID="65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.476812 4849 scope.go:117] "RemoveContainer" containerID="1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.505321 4849 scope.go:117] "RemoveContainer" containerID="5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce" Dec 09 11:31:12 crc kubenswrapper[4849]: E1209 11:31:12.505697 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\": container with ID starting with 5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce not found: ID does not exist" containerID="5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.505727 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce"} err="failed to get container status \"5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\": rpc error: code = NotFound desc = could not find container \"5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce\": container with ID starting with 5c3805a42dc680c6456ce1aeeea74666d74eea43380cfdbd5f705a5414dcd7ce not found: ID does not exist" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.505769 4849 scope.go:117] "RemoveContainer" containerID="ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b" Dec 09 11:31:12 crc kubenswrapper[4849]: E1209 11:31:12.506189 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\": container with ID starting with ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b not found: ID does not exist" containerID="ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.506206 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b"} err="failed to get container status \"ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\": rpc error: code = NotFound desc = could not find container 
\"ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b\": container with ID starting with ae439087cae6c7aa76b05d4b4847e60be7b36017d16eecaf805daff2f35e4f0b not found: ID does not exist" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.506218 4849 scope.go:117] "RemoveContainer" containerID="a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a" Dec 09 11:31:12 crc kubenswrapper[4849]: E1209 11:31:12.507077 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\": container with ID starting with a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a not found: ID does not exist" containerID="a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.507096 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a"} err="failed to get container status \"a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\": rpc error: code = NotFound desc = could not find container \"a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a\": container with ID starting with a123746dfdc1b2662bec4433278b45252d1e0455c361d8456d139f70e4bcf47a not found: ID does not exist" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.507109 4849 scope.go:117] "RemoveContainer" containerID="25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286" Dec 09 11:31:12 crc kubenswrapper[4849]: E1209 11:31:12.507601 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\": container with ID starting with 25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286 not found: ID does not exist" containerID="25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.507620 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286"} err="failed to get container status \"25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\": rpc error: code = NotFound desc = could not find container \"25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286\": container with ID starting with 25e77d35edda7dd0c709c9f451f8b09e6af2a7be31e01071adc67948723e5286 not found: ID does not exist" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.507637 4849 scope.go:117] "RemoveContainer" containerID="65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0" Dec 09 11:31:12 crc kubenswrapper[4849]: E1209 11:31:12.507928 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\": container with ID starting with 65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0 not found: ID does not exist" containerID="65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.507949 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0"} 
err="failed to get container status \"65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\": rpc error: code = NotFound desc = could not find container \"65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0\": container with ID starting with 65a82ce126609bed353822001df90b9731bb2eac39583a58c583e1ccded88af0 not found: ID does not exist" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.507965 4849 scope.go:117] "RemoveContainer" containerID="1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380" Dec 09 11:31:12 crc kubenswrapper[4849]: E1209 11:31:12.508559 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\": container with ID starting with 1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380 not found: ID does not exist" containerID="1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.508584 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380"} err="failed to get container status \"1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\": rpc error: code = NotFound desc = could not find container \"1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380\": container with ID starting with 1f6ace7ca890d123e6905a88f7eedccc48a239b1f18147c7a148db16c6fe9380 not found: ID does not exist" Dec 09 11:31:12 crc kubenswrapper[4849]: I1209 11:31:12.545780 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 09 11:31:12 crc kubenswrapper[4849]: E1209 11:31:12.561704 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.341400 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.341797 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.389656 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.390224 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.391620 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:13 crc 
kubenswrapper[4849]: I1209 11:31:13.392243 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.454952 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dhpb4" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.455396 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.455585 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.455722 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:13 crc kubenswrapper[4849]: E1209 11:31:13.891493 4849 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.892291 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:13 crc kubenswrapper[4849]: W1209 11:31:13.912873 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b6873589e2247882a73f15fc4c607529af7ecd04b8eee147ac1287fae7ce6624 WatchSource:0}: Error finding container b6873589e2247882a73f15fc4c607529af7ecd04b8eee147ac1287fae7ce6624: Status 404 returned error can't find the container with id b6873589e2247882a73f15fc4c607529af7ecd04b8eee147ac1287fae7ce6624 Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.989239 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:13 crc kubenswrapper[4849]: I1209 11:31:13.989286 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.038011 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.038685 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.039289 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.039747 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.039991 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.415108 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b6873589e2247882a73f15fc4c607529af7ecd04b8eee147ac1287fae7ce6624"} Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.620034 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fw7ws" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.620645 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.621049 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.621583 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:14 crc kubenswrapper[4849]: I1209 11:31:14.622104 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:15 crc kubenswrapper[4849]: I1209 11:31:15.422234 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"26a1c34312400e03f34532cf6efbb16ac3e751ad0118037170e97aa3337021ab"} Dec 09 11:31:15 crc kubenswrapper[4849]: E1209 11:31:15.423082 4849 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:15 crc kubenswrapper[4849]: I1209 11:31:15.422947 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:15 crc kubenswrapper[4849]: I1209 11:31:15.423552 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:15 crc kubenswrapper[4849]: I1209 11:31:15.423866 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:15 crc kubenswrapper[4849]: I1209 11:31:15.424196 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.177:6443: connect: connection refused" Dec 09 11:31:15 crc kubenswrapper[4849]: E1209 11:31:15.763180 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="6.4s" Dec 09 11:31:15 crc kubenswrapper[4849]: E1209 11:31:15.988560 4849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-cq7jk.187f88b035e2ec8e openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-cq7jk,UID:22b13fa0-7feb-45d4-8415-1834db2f96c5,APIVersion:v1,ResourceVersion:29563,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 11:31:09.35220955 +0000 UTC m=+251.892093866,LastTimestamp:2025-12-09 11:31:09.35220955 +0000 UTC m=+251.892093866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.344785 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.344862 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.383836 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.384495 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.384704 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.384860 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.385013 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:16 crc kubenswrapper[4849]: E1209 11:31:16.428650 4849 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.463251 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cq7jk" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.464035 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.464728 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.465205 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:16 crc kubenswrapper[4849]: I1209 11:31:16.465560 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:18 crc kubenswrapper[4849]: I1209 11:31:18.540346 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:18 crc kubenswrapper[4849]: I1209 11:31:18.541736 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:18 crc kubenswrapper[4849]: I1209 11:31:18.542096 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:18 crc kubenswrapper[4849]: I1209 11:31:18.542470 4849 status_manager.go:851] 
"Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:19 crc kubenswrapper[4849]: I1209 11:31:19.569460 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" containerName="oauth-openshift" containerID="cri-o://afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb" gracePeriod=15 Dec 09 11:31:19 crc kubenswrapper[4849]: I1209 11:31:19.910524 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:31:19 crc kubenswrapper[4849]: I1209 11:31:19.911376 4849 status_manager.go:851] "Failed to get status for pod" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25rtx\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:19 crc kubenswrapper[4849]: I1209 11:31:19.911729 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:19 crc kubenswrapper[4849]: I1209 11:31:19.911937 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:19 crc kubenswrapper[4849]: I1209 11:31:19.912250 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:19 crc kubenswrapper[4849]: I1209 11:31:19.912548 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.002390 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-serving-cert\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.002477 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-idp-0-file-data\") pod 
\"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.002507 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-router-certs\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.002526 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-service-ca\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.002564 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-error\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.002582 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-provider-selection\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.002601 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-dir\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.002623 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-policies\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.002757 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.003308 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.003425 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.003468 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fk78\" (UniqueName: \"kubernetes.io/projected/ff9d1831-83f7-46b5-a110-4ef163ec3516-kube-api-access-2fk78\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.003505 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-trusted-ca-bundle\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.003539 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-login\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.004284 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-ocp-branding-template\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.004328 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-cliconfig\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.004361 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-session\") pod \"ff9d1831-83f7-46b5-a110-4ef163ec3516\" (UID: \"ff9d1831-83f7-46b5-a110-4ef163ec3516\") " Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.004631 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.004648 4849 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.004660 4849 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.003978 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.008397 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.010724 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.011308 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.011663 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.013100 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.013727 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.014069 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.015023 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.015585 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.016110 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9d1831-83f7-46b5-a110-4ef163ec3516-kube-api-access-2fk78" (OuterVolumeSpecName: "kube-api-access-2fk78") pod "ff9d1831-83f7-46b5-a110-4ef163ec3516" (UID: "ff9d1831-83f7-46b5-a110-4ef163ec3516"). InnerVolumeSpecName "kube-api-access-2fk78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105444 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105473 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105484 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105498 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105509 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105518 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105529 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105539 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fk78\" (UniqueName: \"kubernetes.io/projected/ff9d1831-83f7-46b5-a110-4ef163ec3516-kube-api-access-2fk78\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105548 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105556 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.105565 4849 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff9d1831-83f7-46b5-a110-4ef163ec3516-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.454521 4849 generic.go:334] "Generic (PLEG): container finished" podID="ff9d1831-83f7-46b5-a110-4ef163ec3516" 
containerID="afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb" exitCode=0 Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.454573 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" event={"ID":"ff9d1831-83f7-46b5-a110-4ef163ec3516","Type":"ContainerDied","Data":"afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb"} Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.454605 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" event={"ID":"ff9d1831-83f7-46b5-a110-4ef163ec3516","Type":"ContainerDied","Data":"941f27af35a0c953f35db39fdd3915a7b6a8a0df8497752b5047f04427004124"} Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.454625 4849 scope.go:117] "RemoveContainer" containerID="afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.454761 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.458049 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.458550 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.458745 4849 status_manager.go:851] "Failed to get status for pod" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25rtx\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.458912 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.459502 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.471091 4849 status_manager.go:851] "Failed to get status for pod" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25rtx\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 
11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.471630 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.471898 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.472074 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.472224 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.480541 4849 scope.go:117] "RemoveContainer" containerID="afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb" Dec 09 11:31:20 crc kubenswrapper[4849]: E1209 11:31:20.480967 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb\": container with ID starting with afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb not found: ID does not exist" containerID="afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb" Dec 09 11:31:20 crc kubenswrapper[4849]: I1209 11:31:20.480994 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb"} err="failed to get container status \"afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb\": rpc error: code = NotFound desc = could not find container \"afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb\": container with ID starting with afd6cb7a4933ef64975d981e62fcb54e81c2d71058fc0c2257d3c8d54b8265bb not found: ID does not exist" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.001853 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9rq2m" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.002564 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.002931 4849 status_manager.go:851] "Failed to get status for pod" 
podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.007141 4849 status_manager.go:851] "Failed to get status for pod" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25rtx\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.007439 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.007828 4849 status_manager.go:851] "Failed to get status for pod" podUID="591a8321-876b-43fc-a46e-9e632c31e6ad" pod="openshift-marketplace/redhat-operators-9rq2m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rq2m\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.008158 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.045849 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9rq2m" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.046430 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.046900 4849 status_manager.go:851] "Failed to get status for pod" podUID="591a8321-876b-43fc-a46e-9e632c31e6ad" pod="openshift-marketplace/redhat-operators-9rq2m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rq2m\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.047309 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.047706 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.048066 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:21 crc kubenswrapper[4849]: I1209 11:31:21.048380 4849 status_manager.go:851] "Failed to get status for pod" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25rtx\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: E1209 11:31:22.165547 4849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="7s" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.472320 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.472375 4849 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d" exitCode=1 Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.472431 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d"} Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.472964 4849 scope.go:117] "RemoveContainer" containerID="1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.473998 4849 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.475789 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.476735 4849 status_manager.go:851] "Failed to get status for pod" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25rtx\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc 
kubenswrapper[4849]: I1209 11:31:22.477328 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.477819 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.478532 4849 status_manager.go:851] "Failed to get status for pod" podUID="591a8321-876b-43fc-a46e-9e632c31e6ad" pod="openshift-marketplace/redhat-operators-9rq2m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rq2m\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.479185 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.536133 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.537545 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.537997 4849 status_manager.go:851] "Failed to get status for pod" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25rtx\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.538477 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.538719 4849 status_manager.go:851] "Failed to get status for pod" podUID="591a8321-876b-43fc-a46e-9e632c31e6ad" pod="openshift-marketplace/redhat-operators-9rq2m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rq2m\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.538944 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" 
pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.539140 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.539328 4849 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.554772 4849 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.554808 4849 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:22 crc kubenswrapper[4849]: E1209 11:31:22.555943 4849 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:22 crc kubenswrapper[4849]: I1209 11:31:22.558927 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.479238 4849 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e9c76cd8e87a25429e9df110b615ff0cb408973eac7f7579e8f10f2cff2e842e" exitCode=0 Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.479340 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e9c76cd8e87a25429e9df110b615ff0cb408973eac7f7579e8f10f2cff2e842e"} Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.479384 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"34e8cd32d5581f6095aa7b74eb9d391aee4a9ff5c9bda939d1cf7eb0bd120aee"} Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.479694 4849 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.479710 4849 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.480112 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: E1209 11:31:23.480109 4849 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.480317 4849 status_manager.go:851] "Failed to get status for pod" podUID="591a8321-876b-43fc-a46e-9e632c31e6ad" pod="openshift-marketplace/redhat-operators-9rq2m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rq2m\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.480534 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.480708 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.480904 4849 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.481096 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.481278 4849 status_manager.go:851] "Failed to get status for pod" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25rtx\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.483403 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.483461 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e88f4f7ff76e246c2b843588b65bbc120ae176b1b0e87968dc9396dd39403811"} Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.484064 4849 status_manager.go:851] "Failed to get status for pod" podUID="591a8321-876b-43fc-a46e-9e632c31e6ad" pod="openshift-marketplace/redhat-operators-9rq2m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9rq2m\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.484480 4849 status_manager.go:851] "Failed to get status for pod" podUID="e6fc1b93-1648-4dea-b4ed-8eb4e307011a" pod="openshift-marketplace/certified-operators-dhpb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dhpb4\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.484812 4849 status_manager.go:851] "Failed to get status for pod" podUID="22b13fa0-7feb-45d4-8415-1834db2f96c5" pod="openshift-marketplace/community-operators-cq7jk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cq7jk\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.485094 4849 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.485362 4849 status_manager.go:851] "Failed to get status for pod" podUID="f458026c-1433-4a58-b921-1088b8e9a509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection 
refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.485860 4849 status_manager.go:851] "Failed to get status for pod" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" pod="openshift-authentication/oauth-openshift-558db77b4-25rtx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25rtx\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:23 crc kubenswrapper[4849]: I1209 11:31:23.486202 4849 status_manager.go:851] "Failed to get status for pod" podUID="1d0053b5-2860-49fc-98d9-a9d08c9d6b19" pod="openshift-marketplace/redhat-marketplace-fw7ws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-fw7ws\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 09 11:31:24 crc kubenswrapper[4849]: I1209 11:31:24.499677 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4f523096c49d1b405208614724837ec38be398b560d126752c095baf25165036"} Dec 09 11:31:24 crc kubenswrapper[4849]: I1209 11:31:24.500201 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc85a518a8e5ceaf777fa29ca155f24337d1029e9517aea876b05e43261e00e5"} Dec 09 11:31:24 crc kubenswrapper[4849]: I1209 11:31:24.500215 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b442b2919c5d8f309208bad92e62aaf1df6a67bb6783c7c925b7517d0655fb8"} Dec 09 11:31:25 crc kubenswrapper[4849]: I1209 11:31:25.509375 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a163b26670b212b219ba64518747cc9edb29c82f2ad3a620a603abc28283b092"} Dec 09 11:31:25 crc kubenswrapper[4849]: I1209 11:31:25.509758 4849 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:25 crc kubenswrapper[4849]: I1209 11:31:25.509783 4849 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:25 crc kubenswrapper[4849]: I1209 11:31:25.509761 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"676ba130f3315f5b6eb4b7822abb1c935804145f60a2d0e780a80ec6b538d064"} Dec 09 11:31:25 crc kubenswrapper[4849]: I1209 11:31:25.509812 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:26 crc kubenswrapper[4849]: I1209 11:31:26.409957 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:31:26 crc kubenswrapper[4849]: I1209 11:31:26.410796 4849 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection 
refused" start-of-body= Dec 09 11:31:26 crc kubenswrapper[4849]: I1209 11:31:26.410827 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 09 11:31:27 crc kubenswrapper[4849]: I1209 11:31:27.559213 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:27 crc kubenswrapper[4849]: I1209 11:31:27.559599 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:27 crc kubenswrapper[4849]: I1209 11:31:27.566770 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:30 crc kubenswrapper[4849]: I1209 11:31:30.109955 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:31:30 crc kubenswrapper[4849]: I1209 11:31:30.519925 4849 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:30 crc kubenswrapper[4849]: I1209 11:31:30.535572 4849 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:30 crc kubenswrapper[4849]: I1209 11:31:30.535609 4849 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:30 crc kubenswrapper[4849]: I1209 11:31:30.544444 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:30 crc kubenswrapper[4849]: I1209 11:31:30.669863 4849 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c42bb209-c3b9-44b9-83a5-a9bc186bd237" Dec 09 11:31:31 crc kubenswrapper[4849]: I1209 11:31:31.540314 4849 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:31 crc kubenswrapper[4849]: I1209 11:31:31.540352 4849 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec7a78a9-b507-4a06-98c1-50d9390c6a72" Dec 09 11:31:31 crc kubenswrapper[4849]: I1209 11:31:31.544296 4849 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c42bb209-c3b9-44b9-83a5-a9bc186bd237" Dec 09 11:31:36 crc kubenswrapper[4849]: I1209 11:31:36.410218 4849 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 09 11:31:36 crc kubenswrapper[4849]: I1209 11:31:36.410577 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 09 11:31:40 crc kubenswrapper[4849]: I1209 11:31:40.663448 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 11:31:41 crc kubenswrapper[4849]: I1209 11:31:41.239102 4849 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 09 11:31:41 crc kubenswrapper[4849]: I1209 11:31:41.406963 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 11:31:41 crc kubenswrapper[4849]: I1209 11:31:41.611204 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 11:31:41 crc kubenswrapper[4849]: I1209 11:31:41.613175 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 11:31:41 crc kubenswrapper[4849]: I1209 11:31:41.730538 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 11:31:41 crc kubenswrapper[4849]: I1209 11:31:41.919859 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 09 11:31:42 crc kubenswrapper[4849]: I1209 11:31:42.063033 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 11:31:42 crc kubenswrapper[4849]: I1209 11:31:42.312263 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 09 11:31:42 crc kubenswrapper[4849]: I1209 11:31:42.369624 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 11:31:42 crc kubenswrapper[4849]: I1209 11:31:42.600913 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 11:31:42 crc kubenswrapper[4849]: I1209 11:31:42.714892 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 11:31:42 crc kubenswrapper[4849]: I1209 11:31:42.742384 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 09 11:31:42 crc kubenswrapper[4849]: I1209 11:31:42.775937 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 09 11:31:42 crc kubenswrapper[4849]: I1209 11:31:42.991047 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.011867 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.025650 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.249728 4849 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.256654 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.287120 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.379933 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.890111 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.909304 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.928473 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 11:31:43 crc kubenswrapper[4849]: I1209 11:31:43.939577 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.031914 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.100849 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.150841 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.180659 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.191355 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.369397 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.411837 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.415993 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.530879 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.531698 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.725095 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 
11:31:44.778828 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.853019 4849 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 09 11:31:44 crc kubenswrapper[4849]: I1209 11:31:44.871973 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.085510 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.177951 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.208053 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.215259 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.227718 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.319561 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.332581 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.393975 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.412766 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.470956 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.520565 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.534514 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.591655 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.682432 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.764044 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.820561 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 11:31:45 crc kubenswrapper[4849]: I1209 11:31:45.915398 4849 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.045712 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.113466 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.136667 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.167264 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.200068 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.240528 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.367596 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.369898 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.397035 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.410298 4849 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.410346 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.410390 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.410963 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e88f4f7ff76e246c2b843588b65bbc120ae176b1b0e87968dc9396dd39403811"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.411069 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" 
containerID="cri-o://e88f4f7ff76e246c2b843588b65bbc120ae176b1b0e87968dc9396dd39403811" gracePeriod=30 Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.416249 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.420620 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.481476 4849 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.483776 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cq7jk" podStartSLOduration=38.256168307 podStartE2EDuration="41.483757312s" podCreationTimestamp="2025-12-09 11:31:05 +0000 UTC" firstStartedPulling="2025-12-09 11:31:07.333869604 +0000 UTC m=+249.873753920" lastFinishedPulling="2025-12-09 11:31:10.561458609 +0000 UTC m=+253.101342925" observedRunningTime="2025-12-09 11:31:30.649533612 +0000 UTC m=+273.189417928" watchObservedRunningTime="2025-12-09 11:31:46.483757312 +0000 UTC m=+289.023641628" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.486895 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-25rtx"] Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.486960 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.490847 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.505431 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.505393095 podStartE2EDuration="16.505393095s" podCreationTimestamp="2025-12-09 11:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:31:46.502906261 +0000 UTC m=+289.042790577" watchObservedRunningTime="2025-12-09 11:31:46.505393095 +0000 UTC m=+289.045277411" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.514347 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.543576 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" path="/var/lib/kubelet/pods/ff9d1831-83f7-46b5-a110-4ef163ec3516/volumes" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.558378 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.586288 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.626928 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.694860 4849 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.697803 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.717211 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.735274 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.776539 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.812458 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.826671 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 11:31:46 crc kubenswrapper[4849]: I1209 11:31:46.924700 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.105290 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.152690 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.238895 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.275935 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.319247 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.406235 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.417344 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.421884 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.569997 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.715943 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.762277 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.829023 4849 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 11:31:47 crc kubenswrapper[4849]: I1209 11:31:47.892237 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.057912 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.081688 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.095749 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.241529 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.301684 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.326269 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.479883 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.497451 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.545575 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.581098 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.616014 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.653695 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.693011 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.755426 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.797001 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.870784 4849 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.894826 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 
11:31:48.978276 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 09 11:31:48 crc kubenswrapper[4849]: I1209 11:31:48.998726 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.052937 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.057115 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.069512 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.122458 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.189375 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.298038 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.301654 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.415379 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.461719 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.516929 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.519683 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.554150 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.607239 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.616883 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.644163 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.646760 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.652026 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.728219 4849 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.765954 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.863001 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.929500 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.933901 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 11:31:49 crc kubenswrapper[4849]: I1209 11:31:49.977040 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.007328 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.028813 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.104834 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.133020 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.190924 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.207672 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.265026 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.293287 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.306058 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.382236 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.382261 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.407509 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.528720 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.554802 4849 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.559329 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.628661 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.631798 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.670528 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.736396 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.773501 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.780622 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.819519 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 09 11:31:50 crc kubenswrapper[4849]: I1209 11:31:50.856641 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.106947 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.120123 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.279242 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.401095 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.497670 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.508863 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.533918 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.540088 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.549000 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.565540 4849 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.615818 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.664686 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.706493 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.763253 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.827208 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.859459 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.898837 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 11:31:51 crc kubenswrapper[4849]: I1209 11:31:51.915153 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.002184 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.063053 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.106555 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.141165 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.150304 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.254839 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.301855 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.356515 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.375046 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.596249 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 
11:31:52.651564 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.666725 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.787274 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.853147 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.906022 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.943960 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.945446 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 11:31:52 crc kubenswrapper[4849]: I1209 11:31:52.981942 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.046704 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.089721 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.127127 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.152035 4849 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.152286 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://26a1c34312400e03f34532cf6efbb16ac3e751ad0118037170e97aa3337021ab" gracePeriod=5 Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.153108 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.283297 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.286518 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.386261 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.401983 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 
11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.415976 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.521455 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.601937 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.668588 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.758032 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.820163 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.865566 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 11:31:53 crc kubenswrapper[4849]: I1209 11:31:53.877894 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.008565 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.020911 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.168996 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.235051 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.399705 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.486743 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.618369 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.651769 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.683991 4849 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.919943 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.943129 4849 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 11:31:54 crc kubenswrapper[4849]: I1209 11:31:54.955398 4849 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.101941 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.157040 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.163420 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.313927 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.448959 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.469396 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.525747 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.529143 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.569037 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.585531 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.698959 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 09 11:31:55 crc kubenswrapper[4849]: I1209 11:31:55.904993 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 11:31:56 crc kubenswrapper[4849]: I1209 11:31:56.888379 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 11:31:56 crc kubenswrapper[4849]: I1209 11:31:56.918035 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.387152 4849 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.563180 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz"] Dec 09 11:31:58 crc kubenswrapper[4849]: E1209 11:31:58.564773 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" containerName="oauth-openshift" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.564892 4849 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" containerName="oauth-openshift" Dec 09 11:31:58 crc kubenswrapper[4849]: E1209 11:31:58.564966 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f458026c-1433-4a58-b921-1088b8e9a509" containerName="installer" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.565027 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f458026c-1433-4a58-b921-1088b8e9a509" containerName="installer" Dec 09 11:31:58 crc kubenswrapper[4849]: E1209 11:31:58.565236 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.565323 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.565540 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f458026c-1433-4a58-b921-1088b8e9a509" containerName="installer" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.565640 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.565718 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9d1831-83f7-46b5-a110-4ef163ec3516" containerName="oauth-openshift" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.566317 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.572600 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.572669 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.572867 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.572956 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.573029 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.573183 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.573200 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.573643 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.575703 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.576361 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 11:31:58 crc 
kubenswrapper[4849]: I1209 11:31:58.577233 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.576704 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.581361 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz"] Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.587372 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.591031 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.599975 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.704740 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.704793 4849 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="26a1c34312400e03f34532cf6efbb16ac3e751ad0118037170e97aa3337021ab" exitCode=137 Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721051 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721087 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721110 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721192 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721219 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-pqklh\" (UniqueName: \"kubernetes.io/projected/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-kube-api-access-pqklh\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721252 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721282 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721316 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721335 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721363 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721384 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721429 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " 
pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721466 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.721487 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.751303 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.751370 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822550 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822615 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822634 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822652 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822674 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-session\") pod 
\"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822692 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822713 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822730 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822744 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822794 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822811 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqklh\" (UniqueName: \"kubernetes.io/projected/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-kube-api-access-pqklh\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822834 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822860 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: 
\"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.822879 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.824758 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.825223 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.825314 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.826362 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.827914 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.828648 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.828715 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" 
Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.828781 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.829323 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.829774 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.830664 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.833012 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.833319 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.847857 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqklh\" (UniqueName: \"kubernetes.io/projected/e2ca8153-13a9-4ceb-9799-3b3ca3e80f31-kube-api-access-pqklh\") pod \"oauth-openshift-64f4b9bb7f-w9ttz\" (UID: \"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.886910 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.923826 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.924138 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.924150 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.924212 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.924257 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.924302 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.924507 4849 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.924530 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.924550 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.924568 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:58 crc kubenswrapper[4849]: I1209 11:31:58.928756 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.025154 4849 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.025186 4849 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.025195 4849 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.025206 4849 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.298338 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz"] Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.714556 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" event={"ID":"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31","Type":"ContainerStarted","Data":"98215060196f4be7c1f5f7c282af3f78e349a786a9042731727949caff751afb"} Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.714611 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" event={"ID":"e2ca8153-13a9-4ceb-9799-3b3ca3e80f31","Type":"ContainerStarted","Data":"9de53ec7136215978667825cf7b5d6388e2e93b9066d1e892c5c0cd7f84cd4a9"} Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.715751 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.718885 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.718966 4849 scope.go:117] "RemoveContainer" containerID="26a1c34312400e03f34532cf6efbb16ac3e751ad0118037170e97aa3337021ab" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.719141 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.740401 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" podStartSLOduration=65.740384732 podStartE2EDuration="1m5.740384732s" podCreationTimestamp="2025-12-09 11:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:31:59.738683429 +0000 UTC m=+302.278567745" watchObservedRunningTime="2025-12-09 11:31:59.740384732 +0000 UTC m=+302.280269048" Dec 09 11:31:59 crc kubenswrapper[4849]: I1209 11:31:59.750555 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w9ttz" Dec 09 11:32:00 crc kubenswrapper[4849]: I1209 11:32:00.546775 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 09 11:32:10 crc kubenswrapper[4849]: I1209 11:32:10.181207 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 11:32:16 crc kubenswrapper[4849]: I1209 11:32:16.819674 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 09 11:32:16 crc kubenswrapper[4849]: I1209 11:32:16.822283 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 11:32:16 crc kubenswrapper[4849]: I1209 11:32:16.822339 4849 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e88f4f7ff76e246c2b843588b65bbc120ae176b1b0e87968dc9396dd39403811" exitCode=137 Dec 09 11:32:16 crc kubenswrapper[4849]: I1209 11:32:16.822377 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e88f4f7ff76e246c2b843588b65bbc120ae176b1b0e87968dc9396dd39403811"} Dec 09 11:32:16 crc kubenswrapper[4849]: I1209 11:32:16.822436 4849 scope.go:117] "RemoveContainer" containerID="1770819ceeab08c8ac00a60df44bda9a4f9d6ba5fcc615b44a26c1f1581e3a8d" Dec 09 11:32:17 crc kubenswrapper[4849]: I1209 11:32:17.829611 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 09 11:32:17 crc kubenswrapper[4849]: I1209 11:32:17.830673 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e4610b8b57e5d0b53f60b99fb8c559c75894cea5509b347e6c16f737155de0ed"} Dec 09 11:32:20 crc kubenswrapper[4849]: I1209 11:32:20.109838 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:32:26 crc kubenswrapper[4849]: I1209 11:32:26.410200 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:32:26 crc kubenswrapper[4849]: I1209 11:32:26.416496 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:32:30 crc kubenswrapper[4849]: I1209 11:32:30.114650 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 11:32:38 crc kubenswrapper[4849]: I1209 11:32:38.936394 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv"] Dec 09 11:32:38 crc kubenswrapper[4849]: I1209 11:32:38.937017 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" podUID="c7d83c17-96de-4f5b-b3c0-199d7fa21fab" containerName="route-controller-manager" containerID="cri-o://d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada" gracePeriod=30 Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.022784 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8wwr8"] Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.023266 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" podUID="29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" containerName="controller-manager" containerID="cri-o://6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca" gracePeriod=30 Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.440645 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.475838 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.501803 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-serving-cert\") pod \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.501840 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-client-ca\") pod \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.501862 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-serving-cert\") pod \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.501884 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-config\") pod \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.501905 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22s45\" (UniqueName: \"kubernetes.io/projected/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-kube-api-access-22s45\") pod \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.501927 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-config\") pod \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.501962 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-client-ca\") pod \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.501989 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lcz5\" (UniqueName: \"kubernetes.io/projected/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-kube-api-access-4lcz5\") pod \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\" (UID: \"c7d83c17-96de-4f5b-b3c0-199d7fa21fab\") " Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.502052 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-proxy-ca-bundles\") pod \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\" (UID: \"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d\") " Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.502823 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" 
(UID: "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.503211 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-config" (OuterVolumeSpecName: "config") pod "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" (UID: "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.504306 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-config" (OuterVolumeSpecName: "config") pod "c7d83c17-96de-4f5b-b3c0-199d7fa21fab" (UID: "c7d83c17-96de-4f5b-b3c0-199d7fa21fab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.504843 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-client-ca" (OuterVolumeSpecName: "client-ca") pod "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" (UID: "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.504884 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7d83c17-96de-4f5b-b3c0-199d7fa21fab" (UID: "c7d83c17-96de-4f5b-b3c0-199d7fa21fab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.513884 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-kube-api-access-22s45" (OuterVolumeSpecName: "kube-api-access-22s45") pod "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" (UID: "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d"). InnerVolumeSpecName "kube-api-access-22s45". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.514185 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-kube-api-access-4lcz5" (OuterVolumeSpecName: "kube-api-access-4lcz5") pod "c7d83c17-96de-4f5b-b3c0-199d7fa21fab" (UID: "c7d83c17-96de-4f5b-b3c0-199d7fa21fab"). InnerVolumeSpecName "kube-api-access-4lcz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.514345 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" (UID: "29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.515629 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7d83c17-96de-4f5b-b3c0-199d7fa21fab" (UID: "c7d83c17-96de-4f5b-b3c0-199d7fa21fab"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.603323 4849 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.603377 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.603396 4849 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.603440 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.603463 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.603482 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22s45\" (UniqueName: \"kubernetes.io/projected/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-kube-api-access-22s45\") on node \"crc\" DevicePath \"\"" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.603523 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.603542 4849 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.603558 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lcz5\" (UniqueName: \"kubernetes.io/projected/c7d83c17-96de-4f5b-b3c0-199d7fa21fab-kube-api-access-4lcz5\") on node \"crc\" DevicePath \"\"" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.946817 4849 generic.go:334] "Generic (PLEG): container finished" podID="c7d83c17-96de-4f5b-b3c0-199d7fa21fab" containerID="d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada" exitCode=0 Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.946914 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" event={"ID":"c7d83c17-96de-4f5b-b3c0-199d7fa21fab","Type":"ContainerDied","Data":"d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada"} Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.946952 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" event={"ID":"c7d83c17-96de-4f5b-b3c0-199d7fa21fab","Type":"ContainerDied","Data":"d191806d913e3b03bec4f30e057baa1f0f392bb580d3ebce5764dfe4c842b0ac"} Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.946976 4849 scope.go:117] "RemoveContainer" 
containerID="d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.947115 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.950789 4849 generic.go:334] "Generic (PLEG): container finished" podID="29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" containerID="6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca" exitCode=0 Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.950849 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" event={"ID":"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d","Type":"ContainerDied","Data":"6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca"} Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.950876 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" event={"ID":"29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d","Type":"ContainerDied","Data":"08d60ec0e829810d1ae9e5a3cb8e6d0a7c4620140480fb63e44c5c9b28dd29cf"} Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.950950 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8wwr8" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.971113 4849 scope.go:117] "RemoveContainer" containerID="d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada" Dec 09 11:32:39 crc kubenswrapper[4849]: E1209 11:32:39.971611 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada\": container with ID starting with d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada not found: ID does not exist" containerID="d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.971651 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada"} err="failed to get container status \"d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada\": rpc error: code = NotFound desc = could not find container \"d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada\": container with ID starting with d065a8744422e6b208d9ea46d212dff5986c23b689f2032b6a2f549a77a6dada not found: ID does not exist" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.971681 4849 scope.go:117] "RemoveContainer" containerID="6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.982051 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv"] Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.986246 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5fhv"] Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.989193 4849 scope.go:117] "RemoveContainer" containerID="6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca" Dec 09 11:32:39 crc kubenswrapper[4849]: E1209 11:32:39.990888 4849 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca\": container with ID starting with 6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca not found: ID does not exist" containerID="6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.990936 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca"} err="failed to get container status \"6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca\": rpc error: code = NotFound desc = could not find container \"6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca\": container with ID starting with 6a50e39aee4012c24767591c36a9d5f83c813d954642f3d192a6f30c0d3f5aca not found: ID does not exist" Dec 09 11:32:39 crc kubenswrapper[4849]: I1209 11:32:39.998872 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8wwr8"] Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.008279 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8wwr8"] Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.544026 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" path="/var/lib/kubelet/pods/29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d/volumes" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.544901 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d83c17-96de-4f5b-b3c0-199d7fa21fab" path="/var/lib/kubelet/pods/c7d83c17-96de-4f5b-b3c0-199d7fa21fab/volumes" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.607149 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2rstj"] Dec 09 11:32:40 crc kubenswrapper[4849]: E1209 11:32:40.607480 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" containerName="controller-manager" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.607497 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" containerName="controller-manager" Dec 09 11:32:40 crc kubenswrapper[4849]: E1209 11:32:40.607505 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7d83c17-96de-4f5b-b3c0-199d7fa21fab" containerName="route-controller-manager" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.607511 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d83c17-96de-4f5b-b3c0-199d7fa21fab" containerName="route-controller-manager" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.607625 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d83c17-96de-4f5b-b3c0-199d7fa21fab" containerName="route-controller-manager" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.607636 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d697ea-5188-4dc7-9bc7-68ebf3ee2d4d" containerName="controller-manager" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.608059 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.611355 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.611581 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.611638 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.612513 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.612681 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.612967 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb"] Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.613339 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.613940 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.615824 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.616586 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.617205 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.617453 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.617593 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.617731 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.620664 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.645513 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb"] Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.654564 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2rstj"] Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.722674 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z67h7\" (UniqueName: 
\"kubernetes.io/projected/4d2583a8-36b4-470d-97a7-9b66991ee823-kube-api-access-z67h7\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.722741 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-client-ca\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.722794 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.722938 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ddqm\" (UniqueName: \"kubernetes.io/projected/f1033053-4e9b-42ba-8d51-0171689ed7b0-kube-api-access-2ddqm\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.722973 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1033053-4e9b-42ba-8d51-0171689ed7b0-serving-cert\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.722998 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2583a8-36b4-470d-97a7-9b66991ee823-serving-cert\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.723039 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-client-ca\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.723176 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-config\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.723313 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-config\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.824520 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-client-ca\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.824710 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-config\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.824743 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-config\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.824787 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z67h7\" (UniqueName: \"kubernetes.io/projected/4d2583a8-36b4-470d-97a7-9b66991ee823-kube-api-access-z67h7\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.824867 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-client-ca\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.824935 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.825063 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ddqm\" (UniqueName: \"kubernetes.io/projected/f1033053-4e9b-42ba-8d51-0171689ed7b0-kube-api-access-2ddqm\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.825125 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1033053-4e9b-42ba-8d51-0171689ed7b0-serving-cert\") pod \"controller-manager-d6f97d578-2rstj\" (UID: 
\"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.825164 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2583a8-36b4-470d-97a7-9b66991ee823-serving-cert\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.825385 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-client-ca\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.827105 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-config\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.827254 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-client-ca\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.827675 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.827768 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-config\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.831716 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2583a8-36b4-470d-97a7-9b66991ee823-serving-cert\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.835699 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1033053-4e9b-42ba-8d51-0171689ed7b0-serving-cert\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.851466 4849 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-z67h7\" (UniqueName: \"kubernetes.io/projected/4d2583a8-36b4-470d-97a7-9b66991ee823-kube-api-access-z67h7\") pod \"route-controller-manager-76946b564d-zcmjb\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.853678 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ddqm\" (UniqueName: \"kubernetes.io/projected/f1033053-4e9b-42ba-8d51-0171689ed7b0-kube-api-access-2ddqm\") pod \"controller-manager-d6f97d578-2rstj\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.926701 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:40 crc kubenswrapper[4849]: I1209 11:32:40.934633 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:41 crc kubenswrapper[4849]: I1209 11:32:41.254996 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb"] Dec 09 11:32:42 crc kubenswrapper[4849]: I1209 11:32:42.992457 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2rstj"] Dec 09 11:32:43 crc kubenswrapper[4849]: I1209 11:32:43.905230 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" event={"ID":"f1033053-4e9b-42ba-8d51-0171689ed7b0","Type":"ContainerStarted","Data":"42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a"} Dec 09 11:32:43 crc kubenswrapper[4849]: I1209 11:32:43.905715 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:43 crc kubenswrapper[4849]: I1209 11:32:43.905727 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" event={"ID":"f1033053-4e9b-42ba-8d51-0171689ed7b0","Type":"ContainerStarted","Data":"4d52a1b5a601a19ff89bea030747066ec5829a22c7ccd11c03b8555c1c15b226"} Dec 09 11:32:43 crc kubenswrapper[4849]: I1209 11:32:43.906828 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" event={"ID":"4d2583a8-36b4-470d-97a7-9b66991ee823","Type":"ContainerStarted","Data":"4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1"} Dec 09 11:32:43 crc kubenswrapper[4849]: I1209 11:32:43.906858 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" event={"ID":"4d2583a8-36b4-470d-97a7-9b66991ee823","Type":"ContainerStarted","Data":"4ce60f466fd6b07abc23584eaff66a4a464497e244d8522df8b214e3d8524da2"} Dec 09 11:32:43 crc kubenswrapper[4849]: I1209 11:32:43.907246 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:43 crc kubenswrapper[4849]: I1209 11:32:43.918526 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:32:43 crc kubenswrapper[4849]: I1209 11:32:43.919855 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:32:43 crc kubenswrapper[4849]: I1209 11:32:43.926179 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" podStartSLOduration=4.926168737 podStartE2EDuration="4.926168737s" podCreationTimestamp="2025-12-09 11:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:32:43.923863008 +0000 UTC m=+346.463747324" watchObservedRunningTime="2025-12-09 11:32:43.926168737 +0000 UTC m=+346.466053053" Dec 09 11:32:51 crc kubenswrapper[4849]: I1209 11:32:51.132760 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:32:51 crc kubenswrapper[4849]: I1209 11:32:51.133112 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.094469 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" podStartSLOduration=43.094449706 podStartE2EDuration="43.094449706s" podCreationTimestamp="2025-12-09 11:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:32:43.978046917 +0000 UTC m=+346.517931233" watchObservedRunningTime="2025-12-09 11:33:21.094449706 +0000 UTC m=+383.634334022" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.095511 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kr6c7"] Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.096256 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.117459 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kr6c7"] Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.132820 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.132891 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.152193 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32ff827f-fdc0-4e0c-80d4-547dbb770d32-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.152233 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ff827f-fdc0-4e0c-80d4-547dbb770d32-bound-sa-token\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.152261 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.152293 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ff827f-fdc0-4e0c-80d4-547dbb770d32-trusted-ca\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.152311 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32ff827f-fdc0-4e0c-80d4-547dbb770d32-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.152326 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rpcb\" (UniqueName: \"kubernetes.io/projected/32ff827f-fdc0-4e0c-80d4-547dbb770d32-kube-api-access-7rpcb\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: 
\"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.152345 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32ff827f-fdc0-4e0c-80d4-547dbb770d32-registry-certificates\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.152548 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32ff827f-fdc0-4e0c-80d4-547dbb770d32-registry-tls\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.175092 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.252990 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ff827f-fdc0-4e0c-80d4-547dbb770d32-trusted-ca\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.253031 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32ff827f-fdc0-4e0c-80d4-547dbb770d32-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.253048 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rpcb\" (UniqueName: \"kubernetes.io/projected/32ff827f-fdc0-4e0c-80d4-547dbb770d32-kube-api-access-7rpcb\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.253063 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32ff827f-fdc0-4e0c-80d4-547dbb770d32-registry-certificates\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.253102 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32ff827f-fdc0-4e0c-80d4-547dbb770d32-registry-tls\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 
11:33:21.253131 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32ff827f-fdc0-4e0c-80d4-547dbb770d32-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.253145 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ff827f-fdc0-4e0c-80d4-547dbb770d32-bound-sa-token\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.253809 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32ff827f-fdc0-4e0c-80d4-547dbb770d32-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.254607 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ff827f-fdc0-4e0c-80d4-547dbb770d32-trusted-ca\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.256347 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32ff827f-fdc0-4e0c-80d4-547dbb770d32-registry-certificates\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.258781 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32ff827f-fdc0-4e0c-80d4-547dbb770d32-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.259910 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32ff827f-fdc0-4e0c-80d4-547dbb770d32-registry-tls\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.267552 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rpcb\" (UniqueName: \"kubernetes.io/projected/32ff827f-fdc0-4e0c-80d4-547dbb770d32-kube-api-access-7rpcb\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: \"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.268720 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ff827f-fdc0-4e0c-80d4-547dbb770d32-bound-sa-token\") pod \"image-registry-66df7c8f76-kr6c7\" (UID: 
\"32ff827f-fdc0-4e0c-80d4-547dbb770d32\") " pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.412131 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:21 crc kubenswrapper[4849]: I1209 11:33:21.802705 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kr6c7"] Dec 09 11:33:22 crc kubenswrapper[4849]: I1209 11:33:22.186235 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" event={"ID":"32ff827f-fdc0-4e0c-80d4-547dbb770d32","Type":"ContainerStarted","Data":"b422f700a31f97ce9393138626adfa43ac36b3f127025f8f5682707748598cdc"} Dec 09 11:33:22 crc kubenswrapper[4849]: I1209 11:33:22.186603 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" event={"ID":"32ff827f-fdc0-4e0c-80d4-547dbb770d32","Type":"ContainerStarted","Data":"073e91733bd52729a3ac6d0145a00126419b276b4c9ca076186e37f77563452f"} Dec 09 11:33:22 crc kubenswrapper[4849]: I1209 11:33:22.186666 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:22 crc kubenswrapper[4849]: I1209 11:33:22.213183 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" podStartSLOduration=1.213158869 podStartE2EDuration="1.213158869s" podCreationTimestamp="2025-12-09 11:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:33:22.208225153 +0000 UTC m=+384.748109489" watchObservedRunningTime="2025-12-09 11:33:22.213158869 +0000 UTC m=+384.753043195" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.163262 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2rstj"] Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.164673 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" podUID="f1033053-4e9b-42ba-8d51-0171689ed7b0" containerName="controller-manager" containerID="cri-o://42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a" gracePeriod=30 Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.257784 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb"] Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.258025 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" podUID="4d2583a8-36b4-470d-97a7-9b66991ee823" containerName="route-controller-manager" containerID="cri-o://4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1" gracePeriod=30 Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.660469 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.710817 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.817788 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-client-ca\") pod \"f1033053-4e9b-42ba-8d51-0171689ed7b0\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.817876 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-config\") pod \"4d2583a8-36b4-470d-97a7-9b66991ee823\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.817914 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1033053-4e9b-42ba-8d51-0171689ed7b0-serving-cert\") pod \"f1033053-4e9b-42ba-8d51-0171689ed7b0\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.817944 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-client-ca\") pod \"4d2583a8-36b4-470d-97a7-9b66991ee823\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.817986 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ddqm\" (UniqueName: \"kubernetes.io/projected/f1033053-4e9b-42ba-8d51-0171689ed7b0-kube-api-access-2ddqm\") pod \"f1033053-4e9b-42ba-8d51-0171689ed7b0\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.818037 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-config\") pod \"f1033053-4e9b-42ba-8d51-0171689ed7b0\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.818062 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-proxy-ca-bundles\") pod \"f1033053-4e9b-42ba-8d51-0171689ed7b0\" (UID: \"f1033053-4e9b-42ba-8d51-0171689ed7b0\") " Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.818124 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2583a8-36b4-470d-97a7-9b66991ee823-serving-cert\") pod \"4d2583a8-36b4-470d-97a7-9b66991ee823\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.818168 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z67h7\" (UniqueName: \"kubernetes.io/projected/4d2583a8-36b4-470d-97a7-9b66991ee823-kube-api-access-z67h7\") pod \"4d2583a8-36b4-470d-97a7-9b66991ee823\" (UID: \"4d2583a8-36b4-470d-97a7-9b66991ee823\") " Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.820201 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-client-ca" (OuterVolumeSpecName: "client-ca") pod "4d2583a8-36b4-470d-97a7-9b66991ee823" 
(UID: "4d2583a8-36b4-470d-97a7-9b66991ee823"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.820881 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "f1033053-4e9b-42ba-8d51-0171689ed7b0" (UID: "f1033053-4e9b-42ba-8d51-0171689ed7b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.821383 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-config" (OuterVolumeSpecName: "config") pod "4d2583a8-36b4-470d-97a7-9b66991ee823" (UID: "4d2583a8-36b4-470d-97a7-9b66991ee823"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.822845 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f1033053-4e9b-42ba-8d51-0171689ed7b0" (UID: "f1033053-4e9b-42ba-8d51-0171689ed7b0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.823378 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-config" (OuterVolumeSpecName: "config") pod "f1033053-4e9b-42ba-8d51-0171689ed7b0" (UID: "f1033053-4e9b-42ba-8d51-0171689ed7b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.827324 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2583a8-36b4-470d-97a7-9b66991ee823-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4d2583a8-36b4-470d-97a7-9b66991ee823" (UID: "4d2583a8-36b4-470d-97a7-9b66991ee823"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.827398 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1033053-4e9b-42ba-8d51-0171689ed7b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f1033053-4e9b-42ba-8d51-0171689ed7b0" (UID: "f1033053-4e9b-42ba-8d51-0171689ed7b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.827430 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1033053-4e9b-42ba-8d51-0171689ed7b0-kube-api-access-2ddqm" (OuterVolumeSpecName: "kube-api-access-2ddqm") pod "f1033053-4e9b-42ba-8d51-0171689ed7b0" (UID: "f1033053-4e9b-42ba-8d51-0171689ed7b0"). InnerVolumeSpecName "kube-api-access-2ddqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.828085 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2583a8-36b4-470d-97a7-9b66991ee823-kube-api-access-z67h7" (OuterVolumeSpecName: "kube-api-access-z67h7") pod "4d2583a8-36b4-470d-97a7-9b66991ee823" (UID: "4d2583a8-36b4-470d-97a7-9b66991ee823"). 
InnerVolumeSpecName "kube-api-access-z67h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.919978 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z67h7\" (UniqueName: \"kubernetes.io/projected/4d2583a8-36b4-470d-97a7-9b66991ee823-kube-api-access-z67h7\") on node \"crc\" DevicePath \"\"" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.920023 4849 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.920039 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.920051 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1033053-4e9b-42ba-8d51-0171689ed7b0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.920063 4849 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d2583a8-36b4-470d-97a7-9b66991ee823-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.920073 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ddqm\" (UniqueName: \"kubernetes.io/projected/f1033053-4e9b-42ba-8d51-0171689ed7b0-kube-api-access-2ddqm\") on node \"crc\" DevicePath \"\"" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.920084 4849 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.920099 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1033053-4e9b-42ba-8d51-0171689ed7b0-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:33:25 crc kubenswrapper[4849]: I1209 11:33:25.920110 4849 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d2583a8-36b4-470d-97a7-9b66991ee823-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.207581 4849 generic.go:334] "Generic (PLEG): container finished" podID="4d2583a8-36b4-470d-97a7-9b66991ee823" containerID="4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1" exitCode=0 Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.207654 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" event={"ID":"4d2583a8-36b4-470d-97a7-9b66991ee823","Type":"ContainerDied","Data":"4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1"} Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.207677 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.207712 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb" event={"ID":"4d2583a8-36b4-470d-97a7-9b66991ee823","Type":"ContainerDied","Data":"4ce60f466fd6b07abc23584eaff66a4a464497e244d8522df8b214e3d8524da2"} Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.207738 4849 scope.go:117] "RemoveContainer" containerID="4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.212686 4849 generic.go:334] "Generic (PLEG): container finished" podID="f1033053-4e9b-42ba-8d51-0171689ed7b0" containerID="42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a" exitCode=0 Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.212741 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.212741 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" event={"ID":"f1033053-4e9b-42ba-8d51-0171689ed7b0","Type":"ContainerDied","Data":"42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a"} Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.212985 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2rstj" event={"ID":"f1033053-4e9b-42ba-8d51-0171689ed7b0","Type":"ContainerDied","Data":"4d52a1b5a601a19ff89bea030747066ec5829a22c7ccd11c03b8555c1c15b226"} Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.239232 4849 scope.go:117] "RemoveContainer" containerID="4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1" Dec 09 11:33:26 crc kubenswrapper[4849]: E1209 11:33:26.242105 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1\": container with ID starting with 4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1 not found: ID does not exist" containerID="4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.242161 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1"} err="failed to get container status \"4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1\": rpc error: code = NotFound desc = could not find container \"4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1\": container with ID starting with 4df00692e1c52de28ca98a0ff3d1ea6f7721abf7eec4aaece600801652a2b8c1 not found: ID does not exist" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.242191 4849 scope.go:117] "RemoveContainer" containerID="42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.252326 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb"] Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.259599 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-zcmjb"] Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.262133 4849 scope.go:117] "RemoveContainer" containerID="42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a" Dec 09 11:33:26 crc kubenswrapper[4849]: E1209 11:33:26.262615 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a\": container with ID starting with 42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a not found: ID does not exist" containerID="42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.262670 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a"} err="failed to get container status \"42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a\": rpc error: code = NotFound desc = could not find container \"42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a\": container with ID starting with 42d42cc4c5f0310925490a799cf350ffd4348057992660d6ffffb6ae40e4ba9a not found: ID does not exist" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.262992 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2rstj"] Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.265751 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2rstj"] Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.544092 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d2583a8-36b4-470d-97a7-9b66991ee823" path="/var/lib/kubelet/pods/4d2583a8-36b4-470d-97a7-9b66991ee823/volumes" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.545170 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1033053-4e9b-42ba-8d51-0171689ed7b0" path="/var/lib/kubelet/pods/f1033053-4e9b-42ba-8d51-0171689ed7b0/volumes" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.637698 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9"] Dec 09 11:33:26 crc kubenswrapper[4849]: E1209 11:33:26.637953 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1033053-4e9b-42ba-8d51-0171689ed7b0" containerName="controller-manager" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.637972 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1033053-4e9b-42ba-8d51-0171689ed7b0" containerName="controller-manager" Dec 09 11:33:26 crc kubenswrapper[4849]: E1209 11:33:26.637997 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2583a8-36b4-470d-97a7-9b66991ee823" containerName="route-controller-manager" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.638005 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2583a8-36b4-470d-97a7-9b66991ee823" containerName="route-controller-manager" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.638119 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2583a8-36b4-470d-97a7-9b66991ee823" containerName="route-controller-manager" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.638138 4849 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f1033053-4e9b-42ba-8d51-0171689ed7b0" containerName="controller-manager" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.638530 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.642162 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s"] Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.642943 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.646268 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.646642 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.646926 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.648840 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.650587 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.651177 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.651424 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.652116 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.652197 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.652133 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.655671 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.658020 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9"] Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.658328 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.662442 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.675256 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s"] Dec 09 
11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.729136 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f76c50ed-6d9b-45d4-961c-1c39dff25f92-proxy-ca-bundles\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.729216 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vz2h\" (UniqueName: \"kubernetes.io/projected/f76c50ed-6d9b-45d4-961c-1c39dff25f92-kube-api-access-6vz2h\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.729284 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76c50ed-6d9b-45d4-961c-1c39dff25f92-config\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.729322 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76c50ed-6d9b-45d4-961c-1c39dff25f92-serving-cert\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.729352 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76c50ed-6d9b-45d4-961c-1c39dff25f92-client-ca\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.830746 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76c50ed-6d9b-45d4-961c-1c39dff25f92-config\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.830812 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btggc\" (UniqueName: \"kubernetes.io/projected/a25abaac-ac29-4644-8946-cb773246e397-kube-api-access-btggc\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.830855 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76c50ed-6d9b-45d4-961c-1c39dff25f92-serving-cert\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.830888 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76c50ed-6d9b-45d4-961c-1c39dff25f92-client-ca\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.830930 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25abaac-ac29-4644-8946-cb773246e397-config\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.830971 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f76c50ed-6d9b-45d4-961c-1c39dff25f92-proxy-ca-bundles\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.831014 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a25abaac-ac29-4644-8946-cb773246e397-client-ca\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.831071 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vz2h\" (UniqueName: \"kubernetes.io/projected/f76c50ed-6d9b-45d4-961c-1c39dff25f92-kube-api-access-6vz2h\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.831093 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a25abaac-ac29-4644-8946-cb773246e397-serving-cert\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.832093 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76c50ed-6d9b-45d4-961c-1c39dff25f92-client-ca\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.832255 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f76c50ed-6d9b-45d4-961c-1c39dff25f92-proxy-ca-bundles\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.832745 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f76c50ed-6d9b-45d4-961c-1c39dff25f92-config\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.835577 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76c50ed-6d9b-45d4-961c-1c39dff25f92-serving-cert\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.849169 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vz2h\" (UniqueName: \"kubernetes.io/projected/f76c50ed-6d9b-45d4-961c-1c39dff25f92-kube-api-access-6vz2h\") pod \"controller-manager-95d7bcb4b-8rbj9\" (UID: \"f76c50ed-6d9b-45d4-961c-1c39dff25f92\") " pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.932854 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25abaac-ac29-4644-8946-cb773246e397-config\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.932958 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a25abaac-ac29-4644-8946-cb773246e397-client-ca\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.932983 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a25abaac-ac29-4644-8946-cb773246e397-serving-cert\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.933020 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btggc\" (UniqueName: \"kubernetes.io/projected/a25abaac-ac29-4644-8946-cb773246e397-kube-api-access-btggc\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.934823 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a25abaac-ac29-4644-8946-cb773246e397-client-ca\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.936166 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25abaac-ac29-4644-8946-cb773246e397-config\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: 
\"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.939343 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a25abaac-ac29-4644-8946-cb773246e397-serving-cert\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.950322 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btggc\" (UniqueName: \"kubernetes.io/projected/a25abaac-ac29-4644-8946-cb773246e397-kube-api-access-btggc\") pod \"route-controller-manager-5d86cb577d-fqp5s\" (UID: \"a25abaac-ac29-4644-8946-cb773246e397\") " pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.956941 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:26 crc kubenswrapper[4849]: I1209 11:33:26.965209 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:27 crc kubenswrapper[4849]: I1209 11:33:27.162883 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9"] Dec 09 11:33:27 crc kubenswrapper[4849]: I1209 11:33:27.200324 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s"] Dec 09 11:33:27 crc kubenswrapper[4849]: W1209 11:33:27.209200 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25abaac_ac29_4644_8946_cb773246e397.slice/crio-e6664dd2d0d5e8588e3ec9c04bfedb7c1b93c22c7aa506930556f0ca9819c7f1 WatchSource:0}: Error finding container e6664dd2d0d5e8588e3ec9c04bfedb7c1b93c22c7aa506930556f0ca9819c7f1: Status 404 returned error can't find the container with id e6664dd2d0d5e8588e3ec9c04bfedb7c1b93c22c7aa506930556f0ca9819c7f1 Dec 09 11:33:27 crc kubenswrapper[4849]: I1209 11:33:27.226862 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" event={"ID":"a25abaac-ac29-4644-8946-cb773246e397","Type":"ContainerStarted","Data":"e6664dd2d0d5e8588e3ec9c04bfedb7c1b93c22c7aa506930556f0ca9819c7f1"} Dec 09 11:33:27 crc kubenswrapper[4849]: I1209 11:33:27.237666 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" event={"ID":"f76c50ed-6d9b-45d4-961c-1c39dff25f92","Type":"ContainerStarted","Data":"a34e99e4046d66f516d9ec0a81c9570386a515019793eba4177bbe659c1723d8"} Dec 09 11:33:28 crc kubenswrapper[4849]: I1209 11:33:28.244071 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" event={"ID":"a25abaac-ac29-4644-8946-cb773246e397","Type":"ContainerStarted","Data":"65a8036bf5fadd2c12c666a4234c376c8e4185d20ccfc7e1ee9b9cc23962adb9"} Dec 09 11:33:28 crc kubenswrapper[4849]: I1209 11:33:28.244442 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:28 crc kubenswrapper[4849]: I1209 11:33:28.246003 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" event={"ID":"f76c50ed-6d9b-45d4-961c-1c39dff25f92","Type":"ContainerStarted","Data":"d3a33bd7a8722302053cebc7fcc9952c12a1a54dd513f923175390540d6b13e0"} Dec 09 11:33:28 crc kubenswrapper[4849]: I1209 11:33:28.246452 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:28 crc kubenswrapper[4849]: I1209 11:33:28.251051 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" Dec 09 11:33:28 crc kubenswrapper[4849]: I1209 11:33:28.252885 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" Dec 09 11:33:28 crc kubenswrapper[4849]: I1209 11:33:28.264231 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d86cb577d-fqp5s" podStartSLOduration=3.264215182 podStartE2EDuration="3.264215182s" podCreationTimestamp="2025-12-09 11:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:33:28.263432053 +0000 UTC m=+390.803316419" watchObservedRunningTime="2025-12-09 11:33:28.264215182 +0000 UTC m=+390.804099498" Dec 09 11:33:28 crc kubenswrapper[4849]: I1209 11:33:28.322705 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-95d7bcb4b-8rbj9" podStartSLOduration=3.322683772 podStartE2EDuration="3.322683772s" podCreationTimestamp="2025-12-09 11:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:33:28.32107053 +0000 UTC m=+390.860954846" watchObservedRunningTime="2025-12-09 11:33:28.322683772 +0000 UTC m=+390.862568098" Dec 09 11:33:41 crc kubenswrapper[4849]: I1209 11:33:41.417998 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-kr6c7" Dec 09 11:33:41 crc kubenswrapper[4849]: I1209 11:33:41.482997 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhhqf"] Dec 09 11:33:51 crc kubenswrapper[4849]: I1209 11:33:51.133495 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:33:51 crc kubenswrapper[4849]: I1209 11:33:51.133963 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:33:51 crc kubenswrapper[4849]: I1209 11:33:51.134007 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:33:51 crc kubenswrapper[4849]: I1209 11:33:51.134625 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"579c1698bf3789148ad5988a944ebf95e9935ab2868988359a420af98bca3008"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 11:33:51 crc kubenswrapper[4849]: I1209 11:33:51.134672 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://579c1698bf3789148ad5988a944ebf95e9935ab2868988359a420af98bca3008" gracePeriod=600 Dec 09 11:33:51 crc kubenswrapper[4849]: I1209 11:33:51.377185 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="579c1698bf3789148ad5988a944ebf95e9935ab2868988359a420af98bca3008" exitCode=0 Dec 09 11:33:51 crc kubenswrapper[4849]: I1209 11:33:51.377384 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"579c1698bf3789148ad5988a944ebf95e9935ab2868988359a420af98bca3008"} Dec 09 11:33:51 crc kubenswrapper[4849]: I1209 11:33:51.377705 4849 scope.go:117] "RemoveContainer" containerID="e14dc076578eb51eb58940d27670ae7dba910d9fa007ddb6fbc57212c61a9b71" Dec 09 11:33:52 crc kubenswrapper[4849]: I1209 11:33:52.385073 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"9bf575ce487faa87fad2e90da46de12216b3b9187fb59a7d04f81930ece3edc9"} Dec 09 11:34:06 crc kubenswrapper[4849]: I1209 11:34:06.520473 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" podUID="ca549b95-b862-43e6-8540-595d05555d3c" containerName="registry" containerID="cri-o://ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770" gracePeriod=30 Dec 09 11:34:06 crc kubenswrapper[4849]: I1209 11:34:06.961725 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.126509 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ca549b95-b862-43e6-8540-595d05555d3c-installation-pull-secrets\") pod \"ca549b95-b862-43e6-8540-595d05555d3c\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.126627 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-registry-tls\") pod \"ca549b95-b862-43e6-8540-595d05555d3c\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.126658 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-bound-sa-token\") pod \"ca549b95-b862-43e6-8540-595d05555d3c\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.126691 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lgdg\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-kube-api-access-9lgdg\") pod \"ca549b95-b862-43e6-8540-595d05555d3c\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.126717 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-registry-certificates\") pod \"ca549b95-b862-43e6-8540-595d05555d3c\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.126887 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ca549b95-b862-43e6-8540-595d05555d3c\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.126922 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca549b95-b862-43e6-8540-595d05555d3c-ca-trust-extracted\") pod \"ca549b95-b862-43e6-8540-595d05555d3c\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.126980 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-trusted-ca\") pod \"ca549b95-b862-43e6-8540-595d05555d3c\" (UID: \"ca549b95-b862-43e6-8540-595d05555d3c\") " Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.127958 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ca549b95-b862-43e6-8540-595d05555d3c" (UID: "ca549b95-b862-43e6-8540-595d05555d3c"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.128558 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ca549b95-b862-43e6-8540-595d05555d3c" (UID: "ca549b95-b862-43e6-8540-595d05555d3c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.134058 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ca549b95-b862-43e6-8540-595d05555d3c" (UID: "ca549b95-b862-43e6-8540-595d05555d3c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.134642 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-kube-api-access-9lgdg" (OuterVolumeSpecName: "kube-api-access-9lgdg") pod "ca549b95-b862-43e6-8540-595d05555d3c" (UID: "ca549b95-b862-43e6-8540-595d05555d3c"). InnerVolumeSpecName "kube-api-access-9lgdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.135266 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ca549b95-b862-43e6-8540-595d05555d3c" (UID: "ca549b95-b862-43e6-8540-595d05555d3c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.139472 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ca549b95-b862-43e6-8540-595d05555d3c" (UID: "ca549b95-b862-43e6-8540-595d05555d3c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.140979 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca549b95-b862-43e6-8540-595d05555d3c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ca549b95-b862-43e6-8540-595d05555d3c" (UID: "ca549b95-b862-43e6-8540-595d05555d3c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.149858 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca549b95-b862-43e6-8540-595d05555d3c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ca549b95-b862-43e6-8540-595d05555d3c" (UID: "ca549b95-b862-43e6-8540-595d05555d3c"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.228234 4849 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.228290 4849 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.228305 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lgdg\" (UniqueName: \"kubernetes.io/projected/ca549b95-b862-43e6-8540-595d05555d3c-kube-api-access-9lgdg\") on node \"crc\" DevicePath \"\"" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.228320 4849 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.228352 4849 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ca549b95-b862-43e6-8540-595d05555d3c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.228368 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca549b95-b862-43e6-8540-595d05555d3c-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.228382 4849 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ca549b95-b862-43e6-8540-595d05555d3c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.465319 4849 generic.go:334] "Generic (PLEG): container finished" podID="ca549b95-b862-43e6-8540-595d05555d3c" containerID="ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770" exitCode=0 Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.465378 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" event={"ID":"ca549b95-b862-43e6-8540-595d05555d3c","Type":"ContainerDied","Data":"ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770"} Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.465400 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.465735 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lhhqf" event={"ID":"ca549b95-b862-43e6-8540-595d05555d3c","Type":"ContainerDied","Data":"0ac7e0521152da869cdb6f2e787d8f8f6a42dca9295fd5765bbc921fe8e9afd3"} Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.465763 4849 scope.go:117] "RemoveContainer" containerID="ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.485007 4849 scope.go:117] "RemoveContainer" containerID="ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770" Dec 09 11:34:07 crc kubenswrapper[4849]: E1209 11:34:07.485698 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770\": container with ID starting with ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770 not found: ID does not exist" containerID="ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.485760 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770"} err="failed to get container status \"ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770\": rpc error: code = NotFound desc = could not find container \"ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770\": container with ID starting with ede20529f6934427bfb4605ee3dc029a92b0dabc8ad5d4da47af7bd293b2b770 not found: ID does not exist" Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.497398 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhhqf"] Dec 09 11:34:07 crc kubenswrapper[4849]: I1209 11:34:07.504787 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhhqf"] Dec 09 11:34:08 crc kubenswrapper[4849]: I1209 11:34:08.545966 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca549b95-b862-43e6-8540-595d05555d3c" path="/var/lib/kubelet/pods/ca549b95-b862-43e6-8540-595d05555d3c/volumes" Dec 09 11:35:51 crc kubenswrapper[4849]: I1209 11:35:51.132604 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:35:51 crc kubenswrapper[4849]: I1209 11:35:51.133157 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:36:21 crc kubenswrapper[4849]: I1209 11:36:21.133325 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 
09 11:36:21 crc kubenswrapper[4849]: I1209 11:36:21.134092 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:36:51 crc kubenswrapper[4849]: I1209 11:36:51.132687 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:36:51 crc kubenswrapper[4849]: I1209 11:36:51.133347 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:36:51 crc kubenswrapper[4849]: I1209 11:36:51.133397 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:36:51 crc kubenswrapper[4849]: I1209 11:36:51.133895 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bf575ce487faa87fad2e90da46de12216b3b9187fb59a7d04f81930ece3edc9"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 11:36:51 crc kubenswrapper[4849]: I1209 11:36:51.133947 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://9bf575ce487faa87fad2e90da46de12216b3b9187fb59a7d04f81930ece3edc9" gracePeriod=600 Dec 09 11:36:51 crc kubenswrapper[4849]: I1209 11:36:51.567855 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="9bf575ce487faa87fad2e90da46de12216b3b9187fb59a7d04f81930ece3edc9" exitCode=0 Dec 09 11:36:51 crc kubenswrapper[4849]: I1209 11:36:51.567938 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"9bf575ce487faa87fad2e90da46de12216b3b9187fb59a7d04f81930ece3edc9"} Dec 09 11:36:51 crc kubenswrapper[4849]: I1209 11:36:51.568254 4849 scope.go:117] "RemoveContainer" containerID="579c1698bf3789148ad5988a944ebf95e9935ab2868988359a420af98bca3008" Dec 09 11:36:52 crc kubenswrapper[4849]: I1209 11:36:52.576370 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"048beac97f1401b80a7107cf946bd8ac882621de80936787f6987e142986bbe4"} Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.701228 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-cm7gh"] Dec 09 11:38:28 crc kubenswrapper[4849]: E1209 11:38:28.702128 4849 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ca549b95-b862-43e6-8540-595d05555d3c" containerName="registry" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.702145 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca549b95-b862-43e6-8540-595d05555d3c" containerName="registry" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.702293 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca549b95-b862-43e6-8540-595d05555d3c" containerName="registry" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.702784 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-cm7gh" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.710127 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.714008 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.716007 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-cm7gh"] Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.717542 4849 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ql7pc" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.730457 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-zrdxp"] Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.731890 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-zrdxp" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.740749 4849 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zbtzr" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.786466 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-n6p9q"] Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.787358 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.791915 4849 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hgbhf" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.798702 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l49rr\" (UniqueName: \"kubernetes.io/projected/6ace19a4-a4eb-40fa-af3a-b08a2590c64f-kube-api-access-l49rr\") pod \"cert-manager-cainjector-7f985d654d-cm7gh\" (UID: \"6ace19a4-a4eb-40fa-af3a-b08a2590c64f\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-cm7gh" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.798764 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rmf5\" (UniqueName: \"kubernetes.io/projected/ef0763a6-1232-4f65-a803-50ed551a126a-kube-api-access-7rmf5\") pod \"cert-manager-5b446d88c5-zrdxp\" (UID: \"ef0763a6-1232-4f65-a803-50ed551a126a\") " pod="cert-manager/cert-manager-5b446d88c5-zrdxp" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.803975 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-zrdxp"] Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.815116 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-n6p9q"] Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.900156 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k47q6\" (UniqueName: \"kubernetes.io/projected/9c0dd8aa-7e1e-4af8-aa67-b371f6215b98-kube-api-access-k47q6\") pod \"cert-manager-webhook-5655c58dd6-n6p9q\" (UID: \"9c0dd8aa-7e1e-4af8-aa67-b371f6215b98\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.900243 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l49rr\" (UniqueName: \"kubernetes.io/projected/6ace19a4-a4eb-40fa-af3a-b08a2590c64f-kube-api-access-l49rr\") pod \"cert-manager-cainjector-7f985d654d-cm7gh\" (UID: \"6ace19a4-a4eb-40fa-af3a-b08a2590c64f\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-cm7gh" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.900273 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rmf5\" (UniqueName: \"kubernetes.io/projected/ef0763a6-1232-4f65-a803-50ed551a126a-kube-api-access-7rmf5\") pod \"cert-manager-5b446d88c5-zrdxp\" (UID: \"ef0763a6-1232-4f65-a803-50ed551a126a\") " pod="cert-manager/cert-manager-5b446d88c5-zrdxp" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.924675 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rmf5\" (UniqueName: \"kubernetes.io/projected/ef0763a6-1232-4f65-a803-50ed551a126a-kube-api-access-7rmf5\") pod \"cert-manager-5b446d88c5-zrdxp\" (UID: \"ef0763a6-1232-4f65-a803-50ed551a126a\") " pod="cert-manager/cert-manager-5b446d88c5-zrdxp" Dec 09 11:38:28 crc kubenswrapper[4849]: I1209 11:38:28.924824 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l49rr\" (UniqueName: \"kubernetes.io/projected/6ace19a4-a4eb-40fa-af3a-b08a2590c64f-kube-api-access-l49rr\") pod \"cert-manager-cainjector-7f985d654d-cm7gh\" (UID: \"6ace19a4-a4eb-40fa-af3a-b08a2590c64f\") " 
pod="cert-manager/cert-manager-cainjector-7f985d654d-cm7gh" Dec 09 11:38:29 crc kubenswrapper[4849]: I1209 11:38:29.001054 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k47q6\" (UniqueName: \"kubernetes.io/projected/9c0dd8aa-7e1e-4af8-aa67-b371f6215b98-kube-api-access-k47q6\") pod \"cert-manager-webhook-5655c58dd6-n6p9q\" (UID: \"9c0dd8aa-7e1e-4af8-aa67-b371f6215b98\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" Dec 09 11:38:29 crc kubenswrapper[4849]: I1209 11:38:29.017108 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k47q6\" (UniqueName: \"kubernetes.io/projected/9c0dd8aa-7e1e-4af8-aa67-b371f6215b98-kube-api-access-k47q6\") pod \"cert-manager-webhook-5655c58dd6-n6p9q\" (UID: \"9c0dd8aa-7e1e-4af8-aa67-b371f6215b98\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" Dec 09 11:38:29 crc kubenswrapper[4849]: I1209 11:38:29.032732 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-cm7gh" Dec 09 11:38:29 crc kubenswrapper[4849]: I1209 11:38:29.054875 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-zrdxp" Dec 09 11:38:29 crc kubenswrapper[4849]: I1209 11:38:29.108348 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" Dec 09 11:38:29 crc kubenswrapper[4849]: I1209 11:38:29.283338 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-cm7gh"] Dec 09 11:38:29 crc kubenswrapper[4849]: I1209 11:38:29.291377 4849 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 11:38:29 crc kubenswrapper[4849]: I1209 11:38:29.329890 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-zrdxp"] Dec 09 11:38:29 crc kubenswrapper[4849]: I1209 11:38:29.643717 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-n6p9q"] Dec 09 11:38:30 crc kubenswrapper[4849]: I1209 11:38:30.154045 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-cm7gh" event={"ID":"6ace19a4-a4eb-40fa-af3a-b08a2590c64f","Type":"ContainerStarted","Data":"c274e346c0901b602392e137fdc43d9170e827205a4802e5cad6774690d034af"} Dec 09 11:38:30 crc kubenswrapper[4849]: I1209 11:38:30.155530 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" event={"ID":"9c0dd8aa-7e1e-4af8-aa67-b371f6215b98","Type":"ContainerStarted","Data":"2ed2bca681239dd70f8f0778a1751f14775a4b61b04fdfa022881f2ece8e6a51"} Dec 09 11:38:30 crc kubenswrapper[4849]: I1209 11:38:30.156395 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-zrdxp" event={"ID":"ef0763a6-1232-4f65-a803-50ed551a126a","Type":"ContainerStarted","Data":"20e2a02b96868921ae6fd972e3d04c3e9c442422d57136e0b9671fda47fe0f9c"} Dec 09 11:38:34 crc kubenswrapper[4849]: I1209 11:38:34.179171 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" event={"ID":"9c0dd8aa-7e1e-4af8-aa67-b371f6215b98","Type":"ContainerStarted","Data":"b9d4a9388ed4f88df3194af8bb68fd79a1724ecd762cea61b852cf6522dee0e8"} Dec 09 11:38:34 crc kubenswrapper[4849]: I1209 11:38:34.180513 4849 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" Dec 09 11:38:34 crc kubenswrapper[4849]: I1209 11:38:34.182014 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-zrdxp" event={"ID":"ef0763a6-1232-4f65-a803-50ed551a126a","Type":"ContainerStarted","Data":"f5126e3774776b4f135861faacb1755a943cb6d7083b3dfc418b92d5594ca1df"} Dec 09 11:38:34 crc kubenswrapper[4849]: I1209 11:38:34.183236 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-cm7gh" event={"ID":"6ace19a4-a4eb-40fa-af3a-b08a2590c64f","Type":"ContainerStarted","Data":"f22149481da2bb482c5c090a6e102db2c2b3dd56e208326659da44496c3fba94"} Dec 09 11:38:34 crc kubenswrapper[4849]: I1209 11:38:34.194789 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" podStartSLOduration=2.599627403 podStartE2EDuration="6.194773085s" podCreationTimestamp="2025-12-09 11:38:28 +0000 UTC" firstStartedPulling="2025-12-09 11:38:29.651256556 +0000 UTC m=+692.191140872" lastFinishedPulling="2025-12-09 11:38:33.246402238 +0000 UTC m=+695.786286554" observedRunningTime="2025-12-09 11:38:34.193490273 +0000 UTC m=+696.733374589" watchObservedRunningTime="2025-12-09 11:38:34.194773085 +0000 UTC m=+696.734657391" Dec 09 11:38:34 crc kubenswrapper[4849]: I1209 11:38:34.210014 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-cm7gh" podStartSLOduration=2.316331614 podStartE2EDuration="6.209985682s" podCreationTimestamp="2025-12-09 11:38:28 +0000 UTC" firstStartedPulling="2025-12-09 11:38:29.291148014 +0000 UTC m=+691.831032330" lastFinishedPulling="2025-12-09 11:38:33.184802082 +0000 UTC m=+695.724686398" observedRunningTime="2025-12-09 11:38:34.207777117 +0000 UTC m=+696.747661433" watchObservedRunningTime="2025-12-09 11:38:34.209985682 +0000 UTC m=+696.749869998" Dec 09 11:38:34 crc kubenswrapper[4849]: I1209 11:38:34.224795 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-zrdxp" podStartSLOduration=2.379288653 podStartE2EDuration="6.224777978s" podCreationTimestamp="2025-12-09 11:38:28 +0000 UTC" firstStartedPulling="2025-12-09 11:38:29.340844695 +0000 UTC m=+691.880729011" lastFinishedPulling="2025-12-09 11:38:33.18633402 +0000 UTC m=+695.726218336" observedRunningTime="2025-12-09 11:38:34.222860691 +0000 UTC m=+696.762745027" watchObservedRunningTime="2025-12-09 11:38:34.224777978 +0000 UTC m=+696.764662294" Dec 09 11:38:39 crc kubenswrapper[4849]: I1209 11:38:39.111207 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-n6p9q" Dec 09 11:38:51 crc kubenswrapper[4849]: I1209 11:38:51.133063 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:38:51 crc kubenswrapper[4849]: I1209 11:38:51.133837 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:38:57 crc kubenswrapper[4849]: I1209 11:38:57.243670 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6hf97"] Dec 09 11:38:57 crc kubenswrapper[4849]: I1209 11:38:57.244420 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovn-controller" containerID="cri-o://7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5" gracePeriod=30 Dec 09 11:38:57 crc kubenswrapper[4849]: I1209 11:38:57.244493 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="nbdb" containerID="cri-o://fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc" gracePeriod=30 Dec 09 11:38:57 crc kubenswrapper[4849]: I1209 11:38:57.244570 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="northd" containerID="cri-o://e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3" gracePeriod=30 Dec 09 11:38:57 crc kubenswrapper[4849]: I1209 11:38:57.244624 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90" gracePeriod=30 Dec 09 11:38:57 crc kubenswrapper[4849]: I1209 11:38:57.244670 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kube-rbac-proxy-node" containerID="cri-o://13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3" gracePeriod=30 Dec 09 11:38:57 crc kubenswrapper[4849]: I1209 11:38:57.244711 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovn-acl-logging" containerID="cri-o://dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090" gracePeriod=30 Dec 09 11:38:57 crc kubenswrapper[4849]: I1209 11:38:57.244962 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="sbdb" containerID="cri-o://691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e" gracePeriod=30 Dec 09 11:38:57 crc kubenswrapper[4849]: I1209 11:38:57.323544 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" containerID="cri-o://df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602" gracePeriod=30 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.133085 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/3.log" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.134896 4849 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovn-acl-logging/0.log" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.135358 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovn-controller/0.log" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.135734 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.192681 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gwqc4"] Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.192911 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kube-rbac-proxy-node" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.192924 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kube-rbac-proxy-node" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.192933 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.192941 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.192953 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="sbdb" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.192959 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="sbdb" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.192969 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovn-acl-logging" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.192974 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovn-acl-logging" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.192983 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="nbdb" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.192988 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="nbdb" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.192996 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193001 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.193010 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kubecfg-setup" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193051 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kubecfg-setup" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.193061 4849 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="northd" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193068 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="northd" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.193078 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovn-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193084 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovn-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.193096 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193124 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.193134 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193140 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193286 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193296 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="northd" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193307 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="nbdb" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193314 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193321 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovn-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193335 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193361 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193368 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="sbdb" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193375 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovn-acl-logging" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193384 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kube-rbac-proxy-node" Dec 09 
11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193392 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.193556 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193566 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.193575 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193581 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.193670 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerName="ovnkube-controller" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.195180 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.320626 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/2.log" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321260 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-env-overrides\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321286 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321301 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-netns\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321318 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-systemd-units\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321333 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-bin\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321355 4849 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-config\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321358 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/1.log" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321394 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jm22\" (UniqueName: \"kubernetes.io/projected/205e41c5-82b8-4bac-a27a-49f1e0da94e5-kube-api-access-5jm22\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321391 4849 generic.go:334] "Generic (PLEG): container finished" podID="e5c6e29f-6131-4daa-b297-81eb53e7384c" containerID="ebf4aaa40d1d01e3c26b272ee565c54370454d5bf20e9cf2c3c36076426c1c4d" exitCode=2 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321437 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-slash\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321461 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-ovn\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321485 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-kubelet\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321500 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321510 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-systemd\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321528 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-node-log\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321548 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-script-lib\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321563 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-netd\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321577 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-log-socket\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321599 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-var-lib-openvswitch\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321628 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-openvswitch\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321644 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovn-node-metrics-cert\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321663 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-ovn-kubernetes\") pod \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321677 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-etc-openvswitch\") pod 
\"205e41c5-82b8-4bac-a27a-49f1e0da94e5\" (UID: \"205e41c5-82b8-4bac-a27a-49f1e0da94e5\") " Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321790 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-run-netns\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321817 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-systemd-units\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321836 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mrc\" (UniqueName: \"kubernetes.io/projected/8c4dd07c-5697-42ce-aede-929ba820f840-kube-api-access-f9mrc\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321853 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321870 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-cni-netd\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321884 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-log-socket\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321906 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-cni-bin\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321928 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-run-openvswitch\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321943 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/8c4dd07c-5697-42ce-aede-929ba820f840-env-overrides\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321958 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c4dd07c-5697-42ce-aede-929ba820f840-ovn-node-metrics-cert\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321973 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-kubelet\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321996 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-etc-openvswitch\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322015 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-node-log\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322029 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c4dd07c-5697-42ce-aede-929ba820f840-ovnkube-script-lib\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322059 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-run-systemd\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322073 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-var-lib-openvswitch\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322088 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-slash\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322102 4849 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-run-ovn-kubernetes\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322116 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c4dd07c-5697-42ce-aede-929ba820f840-ovnkube-config\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322138 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-run-ovn\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322171 4849 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321457 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h76bl" event={"ID":"e5c6e29f-6131-4daa-b297-81eb53e7384c","Type":"ContainerDied","Data":"ebf4aaa40d1d01e3c26b272ee565c54370454d5bf20e9cf2c3c36076426c1c4d"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322566 4849 scope.go:117] "RemoveContainer" containerID="954600766ab4dd73fd7ff676e1ff4e6e53acdc03033e3f96d03582f2b268e54b" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321852 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321871 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321885 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321901 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.321921 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322205 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322219 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322503 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-slash" (OuterVolumeSpecName: "host-slash") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322526 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322571 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322588 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322651 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322780 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.322809 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-node-log" (OuterVolumeSpecName: "node-log") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.323082 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-log-socket" (OuterVolumeSpecName: "log-socket") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.323220 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.323707 4849 scope.go:117] "RemoveContainer" containerID="ebf4aaa40d1d01e3c26b272ee565c54370454d5bf20e9cf2c3c36076426c1c4d" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.323928 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-h76bl_openshift-multus(e5c6e29f-6131-4daa-b297-81eb53e7384c)\"" pod="openshift-multus/multus-h76bl" podUID="e5c6e29f-6131-4daa-b297-81eb53e7384c" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.326276 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovnkube-controller/3.log" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.328356 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205e41c5-82b8-4bac-a27a-49f1e0da94e5-kube-api-access-5jm22" (OuterVolumeSpecName: "kube-api-access-5jm22") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "kube-api-access-5jm22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.328718 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovn-acl-logging/0.log" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.330259 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6hf97_205e41c5-82b8-4bac-a27a-49f1e0da94e5/ovn-controller/0.log" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.330728 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602" exitCode=0 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.330838 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e" exitCode=0 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.330902 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.330909 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc" exitCode=0 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331099 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3" exitCode=0 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331173 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90" exitCode=0 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331247 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3" exitCode=0 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331315 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090" exitCode=143 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331399 4849 generic.go:334] "Generic (PLEG): container finished" podID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" containerID="7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5" exitCode=143 Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331333 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331593 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331680 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331775 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331848 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331917 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.331991 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332069 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332145 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332215 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332285 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332362 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332534 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332622 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332689 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332752 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332824 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332897 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.332963 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333038 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333109 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333168 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333227 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333287 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333366 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333461 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333532 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333601 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333675 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333753 4849 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333824 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333884 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.333947 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334014 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334079 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334142 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334207 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334267 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334334 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6hf97" event={"ID":"205e41c5-82b8-4bac-a27a-49f1e0da94e5","Type":"ContainerDied","Data":"62b7e5b3ecf19025402a615a9915c157769ded09a0e0621db3b71c90fd21c5b7"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334401 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334494 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334562 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334624 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334683 4849 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334746 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334856 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.334925 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.335019 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.335082 4849 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4"} Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.337307 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.343646 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "205e41c5-82b8-4bac-a27a-49f1e0da94e5" (UID: "205e41c5-82b8-4bac-a27a-49f1e0da94e5"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.422959 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-etc-openvswitch\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.423124 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-etc-openvswitch\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.423243 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-node-log\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.423319 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-node-log\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.423377 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c4dd07c-5697-42ce-aede-929ba820f840-ovnkube-script-lib\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.423553 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-run-systemd\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424051 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c4dd07c-5697-42ce-aede-929ba820f840-ovnkube-script-lib\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.423405 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-run-systemd\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424124 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-var-lib-openvswitch\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424143 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-slash\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424157 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-run-ovn-kubernetes\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424271 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c4dd07c-5697-42ce-aede-929ba820f840-ovnkube-config\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424224 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-slash\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424230 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-run-ovn-kubernetes\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424197 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-var-lib-openvswitch\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424499 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-run-ovn\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424787 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c4dd07c-5697-42ce-aede-929ba820f840-ovnkube-config\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424824 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-run-ovn\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.424887 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-run-netns\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425034 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-run-netns\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425098 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-systemd-units\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425124 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9mrc\" (UniqueName: \"kubernetes.io/projected/8c4dd07c-5697-42ce-aede-929ba820f840-kube-api-access-f9mrc\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425173 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-systemd-units\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425208 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425229 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-cni-netd\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425270 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425308 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-cni-netd\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425371 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-log-socket\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425536 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-log-socket\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425578 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-cni-bin\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425611 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-run-openvswitch\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425652 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-cni-bin\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425687 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-run-openvswitch\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425697 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c4dd07c-5697-42ce-aede-929ba820f840-env-overrides\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425725 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c4dd07c-5697-42ce-aede-929ba820f840-ovn-node-metrics-cert\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425755 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-kubelet\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425834 4849 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-slash\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: 
I1209 11:38:58.425850 4849 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425861 4849 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425871 4849 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425880 4849 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-node-log\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425889 4849 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425898 4849 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425906 4849 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-log-socket\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425915 4849 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425925 4849 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425939 4849 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425952 4849 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425963 4849 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425977 4849 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.425988 4849 reconciler_common.go:293] "Volume 
detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.426001 4849 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.426014 4849 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/205e41c5-82b8-4bac-a27a-49f1e0da94e5-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.426026 4849 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/205e41c5-82b8-4bac-a27a-49f1e0da94e5-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.426036 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jm22\" (UniqueName: \"kubernetes.io/projected/205e41c5-82b8-4bac-a27a-49f1e0da94e5-kube-api-access-5jm22\") on node \"crc\" DevicePath \"\"" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.426064 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c4dd07c-5697-42ce-aede-929ba820f840-host-kubelet\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.426078 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c4dd07c-5697-42ce-aede-929ba820f840-env-overrides\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.432940 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c4dd07c-5697-42ce-aede-929ba820f840-ovn-node-metrics-cert\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.441061 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9mrc\" (UniqueName: \"kubernetes.io/projected/8c4dd07c-5697-42ce-aede-929ba820f840-kube-api-access-f9mrc\") pod \"ovnkube-node-gwqc4\" (UID: \"8c4dd07c-5697-42ce-aede-929ba820f840\") " pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.515398 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.517619 4849 scope.go:117] "RemoveContainer" containerID="df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.534597 4849 scope.go:117] "RemoveContainer" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.584890 4849 scope.go:117] "RemoveContainer" containerID="691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.606865 4849 scope.go:117] "RemoveContainer" containerID="fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.633912 4849 scope.go:117] "RemoveContainer" containerID="e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.668834 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6hf97"] Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.675446 4849 scope.go:117] "RemoveContainer" containerID="1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.681758 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6hf97"] Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.691057 4849 scope.go:117] "RemoveContainer" containerID="13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.705224 4849 scope.go:117] "RemoveContainer" containerID="dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.777018 4849 scope.go:117] "RemoveContainer" containerID="7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.794355 4849 scope.go:117] "RemoveContainer" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.804775 4849 scope.go:117] "RemoveContainer" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.808524 4849 scope.go:117] "RemoveContainer" containerID="df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.809223 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602\": container with ID starting with df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602 not found: ID does not exist" containerID="df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.809313 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} err="failed to get container status \"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602\": rpc error: code = NotFound desc = could not find container \"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602\": container with ID starting with 
df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.809390 4849 scope.go:117] "RemoveContainer" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.809586 4849 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kubecfg-setup_ovnkube-node-6hf97_openshift-ovn-kubernetes_205e41c5-82b8-4bac-a27a-49f1e0da94e5_0 in pod sandbox 62b7e5b3ecf19025402a615a9915c157769ded09a0e0621db3b71c90fd21c5b7 from index: no such id: '36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4'" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.809664 4849 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kubecfg-setup_ovnkube-node-6hf97_openshift-ovn-kubernetes_205e41c5-82b8-4bac-a27a-49f1e0da94e5_0 in pod sandbox 62b7e5b3ecf19025402a615a9915c157769ded09a0e0621db3b71c90fd21c5b7 from index: no such id: '36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4'" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.809837 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\": container with ID starting with 780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9 not found: ID does not exist" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.809917 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"} err="failed to get container status \"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\": rpc error: code = NotFound desc = could not find container \"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\": container with ID starting with 780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.809982 4849 scope.go:117] "RemoveContainer" containerID="691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.811731 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\": container with ID starting with 691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e not found: ID does not exist" containerID="691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.811774 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} err="failed to get container status \"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\": rpc error: code = NotFound desc = could not find container \"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\": container with ID starting with 
691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.811803 4849 scope.go:117] "RemoveContainer" containerID="fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.812176 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\": container with ID starting with fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc not found: ID does not exist" containerID="fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.812198 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} err="failed to get container status \"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\": rpc error: code = NotFound desc = could not find container \"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\": container with ID starting with fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.812212 4849 scope.go:117] "RemoveContainer" containerID="e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.812526 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\": container with ID starting with e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3 not found: ID does not exist" containerID="e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.812548 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} err="failed to get container status \"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\": rpc error: code = NotFound desc = could not find container \"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\": container with ID starting with e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.812562 4849 scope.go:117] "RemoveContainer" containerID="1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.812825 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\": container with ID starting with 1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90 not found: ID does not exist" containerID="1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.812856 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} err="failed to get container status \"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\": rpc 
error: code = NotFound desc = could not find container \"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\": container with ID starting with 1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.812873 4849 scope.go:117] "RemoveContainer" containerID="13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.813453 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\": container with ID starting with 13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3 not found: ID does not exist" containerID="13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.813474 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} err="failed to get container status \"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\": rpc error: code = NotFound desc = could not find container \"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\": container with ID starting with 13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.813486 4849 scope.go:117] "RemoveContainer" containerID="dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.814102 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\": container with ID starting with dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090 not found: ID does not exist" containerID="dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.814122 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} err="failed to get container status \"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\": rpc error: code = NotFound desc = could not find container \"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\": container with ID starting with dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.814135 4849 scope.go:117] "RemoveContainer" containerID="7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.814304 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\": container with ID starting with 7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5 not found: ID does not exist" containerID="7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.814319 4849 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} err="failed to get container status \"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\": rpc error: code = NotFound desc = could not find container \"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\": container with ID starting with 7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.814330 4849 scope.go:117] "RemoveContainer" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" Dec 09 11:38:58 crc kubenswrapper[4849]: E1209 11:38:58.814510 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\": container with ID starting with 36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4 not found: ID does not exist" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.814527 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4"} err="failed to get container status \"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\": rpc error: code = NotFound desc = could not find container \"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\": container with ID starting with 36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.814538 4849 scope.go:117] "RemoveContainer" containerID="df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.815095 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} err="failed to get container status \"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602\": rpc error: code = NotFound desc = could not find container \"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602\": container with ID starting with df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.815142 4849 scope.go:117] "RemoveContainer" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.815934 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"} err="failed to get container status \"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\": rpc error: code = NotFound desc = could not find container \"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\": container with ID starting with 780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.815959 4849 scope.go:117] "RemoveContainer" containerID="691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.816494 4849 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} err="failed to get container status \"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\": rpc error: code = NotFound desc = could not find container \"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\": container with ID starting with 691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.816511 4849 scope.go:117] "RemoveContainer" containerID="fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.816942 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} err="failed to get container status \"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\": rpc error: code = NotFound desc = could not find container \"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\": container with ID starting with fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.816969 4849 scope.go:117] "RemoveContainer" containerID="e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.817199 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} err="failed to get container status \"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\": rpc error: code = NotFound desc = could not find container \"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\": container with ID starting with e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.817222 4849 scope.go:117] "RemoveContainer" containerID="1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.817576 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} err="failed to get container status \"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\": rpc error: code = NotFound desc = could not find container \"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\": container with ID starting with 1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.817596 4849 scope.go:117] "RemoveContainer" containerID="13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.817873 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} err="failed to get container status \"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\": rpc error: code = NotFound desc = could not find container \"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\": container with ID starting with 13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3 not found: ID does not exist" Dec 
09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.817892 4849 scope.go:117] "RemoveContainer" containerID="dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.818064 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} err="failed to get container status \"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\": rpc error: code = NotFound desc = could not find container \"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\": container with ID starting with dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.818083 4849 scope.go:117] "RemoveContainer" containerID="7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.818351 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} err="failed to get container status \"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\": rpc error: code = NotFound desc = could not find container \"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\": container with ID starting with 7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.818370 4849 scope.go:117] "RemoveContainer" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.818551 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4"} err="failed to get container status \"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\": rpc error: code = NotFound desc = could not find container \"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\": container with ID starting with 36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.818570 4849 scope.go:117] "RemoveContainer" containerID="df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.818862 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} err="failed to get container status \"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602\": rpc error: code = NotFound desc = could not find container \"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602\": container with ID starting with df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.818880 4849 scope.go:117] "RemoveContainer" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.819047 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"} err="failed to get container status 
\"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\": rpc error: code = NotFound desc = could not find container \"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\": container with ID starting with 780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.819097 4849 scope.go:117] "RemoveContainer" containerID="691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.819265 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} err="failed to get container status \"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\": rpc error: code = NotFound desc = could not find container \"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\": container with ID starting with 691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.819296 4849 scope.go:117] "RemoveContainer" containerID="fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.819450 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} err="failed to get container status \"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\": rpc error: code = NotFound desc = could not find container \"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\": container with ID starting with fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.819473 4849 scope.go:117] "RemoveContainer" containerID="e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.819715 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} err="failed to get container status \"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\": rpc error: code = NotFound desc = could not find container \"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\": container with ID starting with e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.819736 4849 scope.go:117] "RemoveContainer" containerID="1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.819990 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} err="failed to get container status \"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\": rpc error: code = NotFound desc = could not find container \"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\": container with ID starting with 1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.820022 4849 scope.go:117] "RemoveContainer" 
containerID="13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.820530 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} err="failed to get container status \"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\": rpc error: code = NotFound desc = could not find container \"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\": container with ID starting with 13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.820548 4849 scope.go:117] "RemoveContainer" containerID="dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.820779 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} err="failed to get container status \"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\": rpc error: code = NotFound desc = could not find container \"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\": container with ID starting with dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.820795 4849 scope.go:117] "RemoveContainer" containerID="7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821001 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} err="failed to get container status \"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\": rpc error: code = NotFound desc = could not find container \"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\": container with ID starting with 7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821018 4849 scope.go:117] "RemoveContainer" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821167 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4"} err="failed to get container status \"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\": rpc error: code = NotFound desc = could not find container \"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\": container with ID starting with 36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821183 4849 scope.go:117] "RemoveContainer" containerID="df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821333 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602"} err="failed to get container status \"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602\": rpc error: code = NotFound desc = could not find 
container \"df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602\": container with ID starting with df9debee613ea6d0dfb983fcd82268d125ed74cb0b004c4abb7c3ce96c43c602 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821349 4849 scope.go:117] "RemoveContainer" containerID="780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821501 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9"} err="failed to get container status \"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\": rpc error: code = NotFound desc = could not find container \"780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9\": container with ID starting with 780916df53c4952a615ddb4422d20e30393f272b90420306357131aeab42cee9 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821524 4849 scope.go:117] "RemoveContainer" containerID="691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821723 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e"} err="failed to get container status \"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\": rpc error: code = NotFound desc = could not find container \"691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e\": container with ID starting with 691bd1716b7318ffd0a57d6b22b958126facf85402ba93b20d2eb243cb5aae9e not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.821739 4849 scope.go:117] "RemoveContainer" containerID="fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.822258 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc"} err="failed to get container status \"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\": rpc error: code = NotFound desc = could not find container \"fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc\": container with ID starting with fc728aa5ee88cf89092550d53e1abf02eb145ac3ab3be3cea0823a62fd6e57cc not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.822312 4849 scope.go:117] "RemoveContainer" containerID="e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.823147 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3"} err="failed to get container status \"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\": rpc error: code = NotFound desc = could not find container \"e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3\": container with ID starting with e7cfa37fdfd88d052964f59631b8633e0b36520b74b13d8eac44f1d60489c4d3 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.823187 4849 scope.go:117] "RemoveContainer" containerID="1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.823404 4849 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90"} err="failed to get container status \"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\": rpc error: code = NotFound desc = could not find container \"1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90\": container with ID starting with 1ab87ddd0d3e3c17ed61230feacafea682885b7b6d7ca1c857c40612331bcd90 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.823445 4849 scope.go:117] "RemoveContainer" containerID="13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.823634 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3"} err="failed to get container status \"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\": rpc error: code = NotFound desc = could not find container \"13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3\": container with ID starting with 13b0228d772b7373cfa9f0848dec54acdf95d51f211351c54721c8adbf7a38f3 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.823652 4849 scope.go:117] "RemoveContainer" containerID="dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.823883 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090"} err="failed to get container status \"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\": rpc error: code = NotFound desc = could not find container \"dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090\": container with ID starting with dd3c050827836acd506f706348f6e02f1ea048c7b34b2b75201c70c6c89e0090 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.823919 4849 scope.go:117] "RemoveContainer" containerID="7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.824944 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5"} err="failed to get container status \"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\": rpc error: code = NotFound desc = could not find container \"7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5\": container with ID starting with 7fc26acd653db4a9d4679ed026180ffe94cbb2b60c05a54ce154b77d041c4ca5 not found: ID does not exist" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.825015 4849 scope.go:117] "RemoveContainer" containerID="36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4" Dec 09 11:38:58 crc kubenswrapper[4849]: I1209 11:38:58.825468 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4"} err="failed to get container status \"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\": rpc error: code = NotFound desc = could not find container \"36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4\": container with ID starting with 
36fa102715f6ce4fdf80e041c4c1ba6c7270eec7b18c8e6695d0ee95824c48c4 not found: ID does not exist" Dec 09 11:38:59 crc kubenswrapper[4849]: I1209 11:38:59.337335 4849 generic.go:334] "Generic (PLEG): container finished" podID="8c4dd07c-5697-42ce-aede-929ba820f840" containerID="56372fea6d72cd8418021e4cc8daf79cdb7d68b21a7c79af2221a3129afe1aff" exitCode=0 Dec 09 11:38:59 crc kubenswrapper[4849]: I1209 11:38:59.337432 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerDied","Data":"56372fea6d72cd8418021e4cc8daf79cdb7d68b21a7c79af2221a3129afe1aff"} Dec 09 11:38:59 crc kubenswrapper[4849]: I1209 11:38:59.338481 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerStarted","Data":"b2b68596ac2dcb6e1eacd3e5f1e5f7a311dab9c0933803119d31f6c8cc71d0d4"} Dec 09 11:38:59 crc kubenswrapper[4849]: I1209 11:38:59.340792 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/2.log" Dec 09 11:39:00 crc kubenswrapper[4849]: I1209 11:39:00.349380 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerStarted","Data":"ea4163d3911dff68211820a3c8b967536aefbfb6046016c2d4d942a838a73ac9"} Dec 09 11:39:00 crc kubenswrapper[4849]: I1209 11:39:00.349757 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerStarted","Data":"3671b0261f2c5c397685ead2d0a6855c82303326c0b7adcb47845f23775c90d2"} Dec 09 11:39:00 crc kubenswrapper[4849]: I1209 11:39:00.349770 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerStarted","Data":"f4590b62c38a38a742dc01840eac519052b048014cb8a827b34f32b457ec67ce"} Dec 09 11:39:00 crc kubenswrapper[4849]: I1209 11:39:00.349780 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerStarted","Data":"9b3b002092f97b40223ca8ee55b31394155fe0ae2af35cc65b953ecff6d914de"} Dec 09 11:39:00 crc kubenswrapper[4849]: I1209 11:39:00.349788 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerStarted","Data":"e5e55cd03227c866856b12b58627966fc2c4f0b8ce6a1063051074c675ebe8c1"} Dec 09 11:39:00 crc kubenswrapper[4849]: I1209 11:39:00.349799 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerStarted","Data":"0035fb085c2428bed29f89c6b9477858149c66294f96139d58b59f9327821e1e"} Dec 09 11:39:00 crc kubenswrapper[4849]: I1209 11:39:00.544117 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205e41c5-82b8-4bac-a27a-49f1e0da94e5" path="/var/lib/kubelet/pods/205e41c5-82b8-4bac-a27a-49f1e0da94e5/volumes" Dec 09 11:39:02 crc kubenswrapper[4849]: I1209 11:39:02.364603 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" 
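The "RemoveContainer" / "DeleteContainer returned error" pairs above are the kubelet garbage-collecting containers whose records are already gone from CRI-O: the gRPC NotFound errors are logged but effectively no-ops, since a missing container is already in the desired end state. A minimal sketch of that NotFound-tolerant cleanup against the CRI API (the wiring is illustrative, not the kubelet's actual code path):

    package cleanup

    import (
    	"context"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // removeIfPresent deletes a container by ID, treating gRPC NotFound as
    // success: if the runtime has no record of the ID, there is nothing left
    // to delete, which is the same outcome as a successful removal.
    func removeIfPresent(ctx context.Context, rt runtimev1.RuntimeServiceClient, id string) error {
    	_, err := rt.RemoveContainer(ctx, &runtimev1.RemoveContainerRequest{ContainerId: id})
    	if status.Code(err) == codes.NotFound {
    		return nil // already gone
    	}
    	return err
    }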
event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerStarted","Data":"fcb4005a618d2e8d2180a96e21113030092d3b52866d0df1f7f710b1ef4ec962"} Dec 09 11:39:05 crc kubenswrapper[4849]: I1209 11:39:05.385774 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" event={"ID":"8c4dd07c-5697-42ce-aede-929ba820f840","Type":"ContainerStarted","Data":"8f19395d373c52de41c0852df760cef258d8f20d7e52cc244515d0c6205f7364"} Dec 09 11:39:05 crc kubenswrapper[4849]: I1209 11:39:05.386358 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:39:05 crc kubenswrapper[4849]: I1209 11:39:05.386371 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:39:05 crc kubenswrapper[4849]: I1209 11:39:05.437799 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:39:05 crc kubenswrapper[4849]: I1209 11:39:05.441123 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" podStartSLOduration=7.441104966 podStartE2EDuration="7.441104966s" podCreationTimestamp="2025-12-09 11:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:39:05.436897912 +0000 UTC m=+727.976782238" watchObservedRunningTime="2025-12-09 11:39:05.441104966 +0000 UTC m=+727.980989282" Dec 09 11:39:06 crc kubenswrapper[4849]: I1209 11:39:06.391302 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:39:06 crc kubenswrapper[4849]: I1209 11:39:06.421126 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4" Dec 09 11:39:09 crc kubenswrapper[4849]: I1209 11:39:09.535939 4849 scope.go:117] "RemoveContainer" containerID="ebf4aaa40d1d01e3c26b272ee565c54370454d5bf20e9cf2c3c36076426c1c4d" Dec 09 11:39:09 crc kubenswrapper[4849]: E1209 11:39:09.536206 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-h76bl_openshift-multus(e5c6e29f-6131-4daa-b297-81eb53e7384c)\"" pod="openshift-multus/multus-h76bl" podUID="e5c6e29f-6131-4daa-b297-81eb53e7384c" Dec 09 11:39:21 crc kubenswrapper[4849]: I1209 11:39:21.132908 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:39:21 crc kubenswrapper[4849]: I1209 11:39:21.133504 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:39:23 crc kubenswrapper[4849]: I1209 11:39:23.536637 4849 scope.go:117] "RemoveContainer" containerID="ebf4aaa40d1d01e3c26b272ee565c54370454d5bf20e9cf2c3c36076426c1c4d" Dec 09 11:39:24 crc kubenswrapper[4849]: I1209 11:39:24.494246 4849 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h76bl_e5c6e29f-6131-4daa-b297-81eb53e7384c/kube-multus/2.log" Dec 09 11:39:24 crc kubenswrapper[4849]: I1209 11:39:24.494607 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h76bl" event={"ID":"e5c6e29f-6131-4daa-b297-81eb53e7384c","Type":"ContainerStarted","Data":"b558de2ab83964b7af890984273ee5094486f973104bb542d5741315a8996d2b"} Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.683735 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd"] Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.685363 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.688720 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.694157 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd"] Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.711850 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g72s\" (UniqueName: \"kubernetes.io/projected/102d8ac7-6bbd-4f2b-874d-345a57d9986f-kube-api-access-5g72s\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.711918 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.711953 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.813057 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g72s\" (UniqueName: \"kubernetes.io/projected/102d8ac7-6bbd-4f2b-874d-345a57d9986f-kube-api-access-5g72s\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.813125 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd\" (UID: 
\"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.813165 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.813661 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.813774 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.833508 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g72s\" (UniqueName: \"kubernetes.io/projected/102d8ac7-6bbd-4f2b-874d-345a57d9986f-kube-api-access-5g72s\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.999633 4849 util.go:30] "No sandbox for pod can be found. 
Dec 09 11:39:27 crc kubenswrapper[4849]: I1209 11:39:27.999633 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd"
Dec 09 11:39:28 crc kubenswrapper[4849]: I1209 11:39:28.221515 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd"]
Dec 09 11:39:28 crc kubenswrapper[4849]: W1209 11:39:28.266214 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod102d8ac7_6bbd_4f2b_874d_345a57d9986f.slice/crio-bb50f6ce78f395fc8728bf341c8f4281ff125c3b26295c0aedbb2a6bb152b774 WatchSource:0}: Error finding container bb50f6ce78f395fc8728bf341c8f4281ff125c3b26295c0aedbb2a6bb152b774: Status 404 returned error can't find the container with id bb50f6ce78f395fc8728bf341c8f4281ff125c3b26295c0aedbb2a6bb152b774
Dec 09 11:39:28 crc kubenswrapper[4849]: I1209 11:39:28.519154 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" event={"ID":"102d8ac7-6bbd-4f2b-874d-345a57d9986f","Type":"ContainerStarted","Data":"0a0defe140be1317e9b18d6d19753a349e4ad773e4ede1eac85c9ed2e02f326a"}
Dec 09 11:39:28 crc kubenswrapper[4849]: I1209 11:39:28.519730 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" event={"ID":"102d8ac7-6bbd-4f2b-874d-345a57d9986f","Type":"ContainerStarted","Data":"bb50f6ce78f395fc8728bf341c8f4281ff125c3b26295c0aedbb2a6bb152b774"}
Dec 09 11:39:28 crc kubenswrapper[4849]: I1209 11:39:28.551563 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gwqc4"
Dec 09 11:39:29 crc kubenswrapper[4849]: I1209 11:39:29.525258 4849 generic.go:334] "Generic (PLEG): container finished" podID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerID="0a0defe140be1317e9b18d6d19753a349e4ad773e4ede1eac85c9ed2e02f326a" exitCode=0
Dec 09 11:39:29 crc kubenswrapper[4849]: I1209 11:39:29.525530 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" event={"ID":"102d8ac7-6bbd-4f2b-874d-345a57d9986f","Type":"ContainerDied","Data":"0a0defe140be1317e9b18d6d19753a349e4ad773e4ede1eac85c9ed2e02f326a"}
Dec 09 11:39:32 crc kubenswrapper[4849]: I1209 11:39:32.543162 4849 generic.go:334] "Generic (PLEG): container finished" podID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerID="e6e102c41ce173298d0ce4de31ae02b8e67c437037f14e42d3eb23b9abaee4e7" exitCode=0
Dec 09 11:39:32 crc kubenswrapper[4849]: I1209 11:39:32.543554 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" event={"ID":"102d8ac7-6bbd-4f2b-874d-345a57d9986f","Type":"ContainerDied","Data":"e6e102c41ce173298d0ce4de31ae02b8e67c437037f14e42d3eb23b9abaee4e7"}
Dec 09 11:39:33 crc kubenswrapper[4849]: I1209 11:39:33.552164 4849 generic.go:334] "Generic (PLEG): container finished" podID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerID="132032ff1c32821b39a072d735e6c1f28bfdf068b4bd1320d73f50c37789d481" exitCode=0
Dec 09 11:39:33 crc kubenswrapper[4849]: I1209 11:39:33.552216 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" event={"ID":"102d8ac7-6bbd-4f2b-874d-345a57d9986f","Type":"ContainerDied","Data":"132032ff1c32821b39a072d735e6c1f28bfdf068b4bd1320d73f50c37789d481"}
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.018638 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd"
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.205039 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-util\") pod \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") "
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.205095 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g72s\" (UniqueName: \"kubernetes.io/projected/102d8ac7-6bbd-4f2b-874d-345a57d9986f-kube-api-access-5g72s\") pod \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") "
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.205122 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-bundle\") pod \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\" (UID: \"102d8ac7-6bbd-4f2b-874d-345a57d9986f\") "
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.205829 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-bundle" (OuterVolumeSpecName: "bundle") pod "102d8ac7-6bbd-4f2b-874d-345a57d9986f" (UID: "102d8ac7-6bbd-4f2b-874d-345a57d9986f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.216798 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102d8ac7-6bbd-4f2b-874d-345a57d9986f-kube-api-access-5g72s" (OuterVolumeSpecName: "kube-api-access-5g72s") pod "102d8ac7-6bbd-4f2b-874d-345a57d9986f" (UID: "102d8ac7-6bbd-4f2b-874d-345a57d9986f"). InnerVolumeSpecName "kube-api-access-5g72s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.225363 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-util" (OuterVolumeSpecName: "util") pod "102d8ac7-6bbd-4f2b-874d-345a57d9986f" (UID: "102d8ac7-6bbd-4f2b-874d-345a57d9986f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.306444 4849 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-util\") on node \"crc\" DevicePath \"\""
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.306490 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g72s\" (UniqueName: \"kubernetes.io/projected/102d8ac7-6bbd-4f2b-874d-345a57d9986f-kube-api-access-5g72s\") on node \"crc\" DevicePath \"\""
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.306501 4849 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/102d8ac7-6bbd-4f2b-874d-345a57d9986f-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.565599 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd" event={"ID":"102d8ac7-6bbd-4f2b-874d-345a57d9986f","Type":"ContainerDied","Data":"bb50f6ce78f395fc8728bf341c8f4281ff125c3b26295c0aedbb2a6bb152b774"}
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.565662 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb50f6ce78f395fc8728bf341c8f4281ff125c3b26295c0aedbb2a6bb152b774"
Dec 09 11:39:35 crc kubenswrapper[4849]: I1209 11:39:35.565759 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.848498 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd"]
Dec 09 11:39:36 crc kubenswrapper[4849]: E1209 11:39:36.848765 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerName="extract"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.848783 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerName="extract"
Dec 09 11:39:36 crc kubenswrapper[4849]: E1209 11:39:36.848803 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerName="util"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.848825 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerName="util"
Dec 09 11:39:36 crc kubenswrapper[4849]: E1209 11:39:36.848836 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerName="pull"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.848844 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerName="pull"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.848963 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="102d8ac7-6bbd-4f2b-874d-345a57d9986f" containerName="extract"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.849454 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.855037 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.855106 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-sxlw5"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.855361 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.864561 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd"]
Dec 09 11:39:36 crc kubenswrapper[4849]: I1209 11:39:36.923703 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm2dj\" (UniqueName: \"kubernetes.io/projected/6a094363-5e56-4743-99fb-4fc11e2d74cd-kube-api-access-nm2dj\") pod \"nmstate-operator-5b5b58f5c8-c7nwd\" (UID: \"6a094363-5e56-4743-99fb-4fc11e2d74cd\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd"
Dec 09 11:39:37 crc kubenswrapper[4849]: I1209 11:39:37.024739 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm2dj\" (UniqueName: \"kubernetes.io/projected/6a094363-5e56-4743-99fb-4fc11e2d74cd-kube-api-access-nm2dj\") pod \"nmstate-operator-5b5b58f5c8-c7nwd\" (UID: \"6a094363-5e56-4743-99fb-4fc11e2d74cd\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd"
Dec 09 11:39:37 crc kubenswrapper[4849]: I1209 11:39:37.047443 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm2dj\" (UniqueName: \"kubernetes.io/projected/6a094363-5e56-4743-99fb-4fc11e2d74cd-kube-api-access-nm2dj\") pod \"nmstate-operator-5b5b58f5c8-c7nwd\" (UID: \"6a094363-5e56-4743-99fb-4fc11e2d74cd\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd"
Dec 09 11:39:37 crc kubenswrapper[4849]: I1209 11:39:37.167045 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd"
Dec 09 11:39:37 crc kubenswrapper[4849]: I1209 11:39:37.712777 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd"]
Dec 09 11:39:37 crc kubenswrapper[4849]: W1209 11:39:37.724613 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a094363_5e56_4743_99fb_4fc11e2d74cd.slice/crio-c6b1d7bc8eeda63e0d384794980ba78189e2b67aee4e0d070ef34cccd6c3385c WatchSource:0}: Error finding container c6b1d7bc8eeda63e0d384794980ba78189e2b67aee4e0d070ef34cccd6c3385c: Status 404 returned error can't find the container with id c6b1d7bc8eeda63e0d384794980ba78189e2b67aee4e0d070ef34cccd6c3385c
Dec 09 11:39:38 crc kubenswrapper[4849]: I1209 11:39:38.585279 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd" event={"ID":"6a094363-5e56-4743-99fb-4fc11e2d74cd","Type":"ContainerStarted","Data":"c6b1d7bc8eeda63e0d384794980ba78189e2b67aee4e0d070ef34cccd6c3385c"}
Dec 09 11:39:40 crc kubenswrapper[4849]: I1209 11:39:40.597448 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd" event={"ID":"6a094363-5e56-4743-99fb-4fc11e2d74cd","Type":"ContainerStarted","Data":"383065d61be3a32f8a7c60c36d55c21ee514d81ec30fa8232a42e0be6a36a658"}
Dec 09 11:39:40 crc kubenswrapper[4849]: I1209 11:39:40.613978 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-c7nwd" podStartSLOduration=2.368504649 podStartE2EDuration="4.613961983s" podCreationTimestamp="2025-12-09 11:39:36 +0000 UTC" firstStartedPulling="2025-12-09 11:39:37.727062837 +0000 UTC m=+760.266947153" lastFinishedPulling="2025-12-09 11:39:39.972520171 +0000 UTC m=+762.512404487" observedRunningTime="2025-12-09 11:39:40.61103888 +0000 UTC m=+763.150923206" watchObservedRunningTime="2025-12-09 11:39:40.613961983 +0000 UTC m=+763.153846299"
Dec 09 11:39:42 crc kubenswrapper[4849]: I1209 11:39:42.473727 4849 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 09 11:39:47 crc kubenswrapper[4849]: I1209 11:39:47.986556 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl"]
Dec 09 11:39:47 crc kubenswrapper[4849]: I1209 11:39:47.988173 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl"
Dec 09 11:39:47 crc kubenswrapper[4849]: I1209 11:39:47.989739 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ghxgl"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.003951 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl"]
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.009897 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"]
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.010835 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.017970 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.037768 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"]
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.043741 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mlpgl"]
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.060642 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mlpgl"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.078846 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-m2zmv\" (UID: \"e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.078900 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swmtq\" (UniqueName: \"kubernetes.io/projected/4412c89c-f551-4683-8682-8fc188bf086d-kube-api-access-swmtq\") pod \"nmstate-metrics-7f946cbc9-4d6rl\" (UID: \"4412c89c-f551-4683-8682-8fc188bf086d\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.078945 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktq4z\" (UniqueName: \"kubernetes.io/projected/e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518-kube-api-access-ktq4z\") pod \"nmstate-webhook-5f6d4c5ccb-m2zmv\" (UID: \"e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.161251 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r"]
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.163578 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.167920 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.169714 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jkwbz"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.173198 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r"]
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.177590 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.180686 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6f62a435-b00e-4eba-a243-91c18c9639e4-nmstate-lock\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.180727 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knsbq\" (UniqueName: \"kubernetes.io/projected/6f62a435-b00e-4eba-a243-91c18c9639e4-kube-api-access-knsbq\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.180763 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6f62a435-b00e-4eba-a243-91c18c9639e4-dbus-socket\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.181102 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-m2zmv\" (UID: \"e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.181219 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swmtq\" (UniqueName: \"kubernetes.io/projected/4412c89c-f551-4683-8682-8fc188bf086d-kube-api-access-swmtq\") pod \"nmstate-metrics-7f946cbc9-4d6rl\" (UID: \"4412c89c-f551-4683-8682-8fc188bf086d\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl"
Dec 09 11:39:48 crc kubenswrapper[4849]: E1209 11:39:48.181229 4849 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.181279 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6f62a435-b00e-4eba-a243-91c18c9639e4-ovs-socket\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl"
Dec 09 11:39:48 crc kubenswrapper[4849]: E1209 11:39:48.181352 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518-tls-key-pair podName:e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518 nodeName:}" failed. No retries permitted until 2025-12-09 11:39:48.681327162 +0000 UTC m=+771.221211478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-m2zmv" (UID: "e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518") : secret "openshift-nmstate-webhook" not found
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.181526 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktq4z\" (UniqueName: \"kubernetes.io/projected/e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518-kube-api-access-ktq4z\") pod \"nmstate-webhook-5f6d4c5ccb-m2zmv\" (UID: \"e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.202688 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktq4z\" (UniqueName: \"kubernetes.io/projected/e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518-kube-api-access-ktq4z\") pod \"nmstate-webhook-5f6d4c5ccb-m2zmv\" (UID: \"e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.213351 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swmtq\" (UniqueName: \"kubernetes.io/projected/4412c89c-f551-4683-8682-8fc188bf086d-kube-api-access-swmtq\") pod \"nmstate-metrics-7f946cbc9-4d6rl\" (UID: \"4412c89c-f551-4683-8682-8fc188bf086d\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.282960 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3c6cd138-dbe0-4baf-a149-341d01905fc8-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-fsp9r\" (UID: \"3c6cd138-dbe0-4baf-a149-341d01905fc8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.283030 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6f62a435-b00e-4eba-a243-91c18c9639e4-ovs-socket\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.283085 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6f62a435-b00e-4eba-a243-91c18c9639e4-ovs-socket\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.283117 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6cd138-dbe0-4baf-a149-341d01905fc8-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-fsp9r\" (UID: \"3c6cd138-dbe0-4baf-a149-341d01905fc8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r"
\"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6f62a435-b00e-4eba-a243-91c18c9639e4-nmstate-lock\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.283276 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6f62a435-b00e-4eba-a243-91c18c9639e4-nmstate-lock\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.283286 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knsbq\" (UniqueName: \"kubernetes.io/projected/6f62a435-b00e-4eba-a243-91c18c9639e4-kube-api-access-knsbq\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.283335 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szg6\" (UniqueName: \"kubernetes.io/projected/3c6cd138-dbe0-4baf-a149-341d01905fc8-kube-api-access-7szg6\") pod \"nmstate-console-plugin-7fbb5f6569-fsp9r\" (UID: \"3c6cd138-dbe0-4baf-a149-341d01905fc8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.283372 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6f62a435-b00e-4eba-a243-91c18c9639e4-dbus-socket\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.283728 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6f62a435-b00e-4eba-a243-91c18c9639e4-dbus-socket\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.305790 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.333068 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knsbq\" (UniqueName: \"kubernetes.io/projected/6f62a435-b00e-4eba-a243-91c18c9639e4-kube-api-access-knsbq\") pod \"nmstate-handler-mlpgl\" (UID: \"6f62a435-b00e-4eba-a243-91c18c9639e4\") " pod="openshift-nmstate/nmstate-handler-mlpgl" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.380333 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-mlpgl" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.384915 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szg6\" (UniqueName: \"kubernetes.io/projected/3c6cd138-dbe0-4baf-a149-341d01905fc8-kube-api-access-7szg6\") pod \"nmstate-console-plugin-7fbb5f6569-fsp9r\" (UID: \"3c6cd138-dbe0-4baf-a149-341d01905fc8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.384983 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3c6cd138-dbe0-4baf-a149-341d01905fc8-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-fsp9r\" (UID: \"3c6cd138-dbe0-4baf-a149-341d01905fc8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.385027 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6cd138-dbe0-4baf-a149-341d01905fc8-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-fsp9r\" (UID: \"3c6cd138-dbe0-4baf-a149-341d01905fc8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.387625 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3c6cd138-dbe0-4baf-a149-341d01905fc8-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-fsp9r\" (UID: \"3c6cd138-dbe0-4baf-a149-341d01905fc8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.400265 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6cd138-dbe0-4baf-a149-341d01905fc8-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-fsp9r\" (UID: \"3c6cd138-dbe0-4baf-a149-341d01905fc8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.412494 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-77c4f49446-f5928"] Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.414013 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77c4f49446-f5928" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.418739 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7szg6\" (UniqueName: \"kubernetes.io/projected/3c6cd138-dbe0-4baf-a149-341d01905fc8-kube-api-access-7szg6\") pod \"nmstate-console-plugin-7fbb5f6569-fsp9r\" (UID: \"3c6cd138-dbe0-4baf-a149-341d01905fc8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.426431 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77c4f49446-f5928"] Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.477609 4849 util.go:30] "No sandbox for pod can be found. 
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.477609 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.485949 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-console-config\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.486004 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e72549fa-ea28-4866-82cc-c636a68e192a-console-oauth-config\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.486026 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p674n\" (UniqueName: \"kubernetes.io/projected/e72549fa-ea28-4866-82cc-c636a68e192a-kube-api-access-p674n\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.486050 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-oauth-serving-cert\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.486101 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-trusted-ca-bundle\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.486117 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-service-ca\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.486132 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e72549fa-ea28-4866-82cc-c636a68e192a-console-serving-cert\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.587041 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-oauth-serving-cert\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.587144 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-trusted-ca-bundle\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.587172 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-service-ca\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.587189 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e72549fa-ea28-4866-82cc-c636a68e192a-console-serving-cert\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.587210 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-console-config\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.587252 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e72549fa-ea28-4866-82cc-c636a68e192a-console-oauth-config\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.587279 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p674n\" (UniqueName: \"kubernetes.io/projected/e72549fa-ea28-4866-82cc-c636a68e192a-kube-api-access-p674n\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.587854 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-oauth-serving-cert\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.588976 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-console-config\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.589228 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-trusted-ca-bundle\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.592401 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e72549fa-ea28-4866-82cc-c636a68e192a-service-ca\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.604259 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e72549fa-ea28-4866-82cc-c636a68e192a-console-oauth-config\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.613809 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e72549fa-ea28-4866-82cc-c636a68e192a-console-serving-cert\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.623167 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p674n\" (UniqueName: \"kubernetes.io/projected/e72549fa-ea28-4866-82cc-c636a68e192a-kube-api-access-p674n\") pod \"console-77c4f49446-f5928\" (UID: \"e72549fa-ea28-4866-82cc-c636a68e192a\") " pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.645570 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mlpgl" event={"ID":"6f62a435-b00e-4eba-a243-91c18c9639e4","Type":"ContainerStarted","Data":"ab0ef609cc48e8c3f48450be19e6cb1748701b69f493291dc7209cdeda057c6a"}
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.680656 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl"]
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.691817 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-m2zmv\" (UID: \"e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.699361 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-m2zmv\" (UID: \"e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.751660 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77c4f49446-f5928"
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.922312 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77c4f49446-f5928"]
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.924874 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"
Dec 09 11:39:48 crc kubenswrapper[4849]: W1209 11:39:48.937291 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode72549fa_ea28_4866_82cc_c636a68e192a.slice/crio-c1232e3fa8edd947cd509824823217ac3ceacd773bcc0ec6585578b17edc983d WatchSource:0}: Error finding container c1232e3fa8edd947cd509824823217ac3ceacd773bcc0ec6585578b17edc983d: Status 404 returned error can't find the container with id c1232e3fa8edd947cd509824823217ac3ceacd773bcc0ec6585578b17edc983d
Dec 09 11:39:48 crc kubenswrapper[4849]: I1209 11:39:48.975193 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r"]
Dec 09 11:39:48 crc kubenswrapper[4849]: W1209 11:39:48.984600 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c6cd138_dbe0_4baf_a149_341d01905fc8.slice/crio-952e99debeea59a266c66e8a0eefbfee7473a4189223e43f9be661edf245d2d7 WatchSource:0}: Error finding container 952e99debeea59a266c66e8a0eefbfee7473a4189223e43f9be661edf245d2d7: Status 404 returned error can't find the container with id 952e99debeea59a266c66e8a0eefbfee7473a4189223e43f9be661edf245d2d7
Dec 09 11:39:49 crc kubenswrapper[4849]: I1209 11:39:49.327041 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv"]
Dec 09 11:39:49 crc kubenswrapper[4849]: W1209 11:39:49.334716 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6fe8bd2_eeed_4a5f_b2a8_eec7fd6b9518.slice/crio-724407c027df527f86135c3a4aeaa5b15e06dc053b51f7594380d178076522ea WatchSource:0}: Error finding container 724407c027df527f86135c3a4aeaa5b15e06dc053b51f7594380d178076522ea: Status 404 returned error can't find the container with id 724407c027df527f86135c3a4aeaa5b15e06dc053b51f7594380d178076522ea
Dec 09 11:39:49 crc kubenswrapper[4849]: I1209 11:39:49.652093 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" event={"ID":"3c6cd138-dbe0-4baf-a149-341d01905fc8","Type":"ContainerStarted","Data":"952e99debeea59a266c66e8a0eefbfee7473a4189223e43f9be661edf245d2d7"}
Dec 09 11:39:49 crc kubenswrapper[4849]: I1209 11:39:49.653813 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77c4f49446-f5928" event={"ID":"e72549fa-ea28-4866-82cc-c636a68e192a","Type":"ContainerStarted","Data":"1bbb57c9752a7029bbf5b55ad53167ef6c3355539bcb1e7ac43d421f98417b06"}
Dec 09 11:39:49 crc kubenswrapper[4849]: I1209 11:39:49.653862 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77c4f49446-f5928" event={"ID":"e72549fa-ea28-4866-82cc-c636a68e192a","Type":"ContainerStarted","Data":"c1232e3fa8edd947cd509824823217ac3ceacd773bcc0ec6585578b17edc983d"}
Dec 09 11:39:49 crc kubenswrapper[4849]: I1209 11:39:49.654850 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv" event={"ID":"e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518","Type":"ContainerStarted","Data":"724407c027df527f86135c3a4aeaa5b15e06dc053b51f7594380d178076522ea"}
Dec 09 11:39:49 crc kubenswrapper[4849]: I1209 11:39:49.656229 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl" event={"ID":"4412c89c-f551-4683-8682-8fc188bf086d","Type":"ContainerStarted","Data":"8cb1067974e67a5a04213fceaf396ecbca671f9d81bf30ca6d3b4ed6cba9493c"}
Dec 09 11:39:49 crc kubenswrapper[4849]: I1209 11:39:49.678241 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-77c4f49446-f5928" podStartSLOduration=1.678220879 podStartE2EDuration="1.678220879s" podCreationTimestamp="2025-12-09 11:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:39:49.670178509 +0000 UTC m=+772.210062835" watchObservedRunningTime="2025-12-09 11:39:49.678220879 +0000 UTC m=+772.218105205"
Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.132515 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.132960 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.133010 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx"
Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.134504 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"048beac97f1401b80a7107cf946bd8ac882621de80936787f6987e142986bbe4"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.134688 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://048beac97f1401b80a7107cf946bd8ac882621de80936787f6987e142986bbe4" gracePeriod=600
Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.681101 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mlpgl" event={"ID":"6f62a435-b00e-4eba-a243-91c18c9639e4","Type":"ContainerStarted","Data":"1db677e30d2355e6e3393784c1b106465491797b48aead9cd2d66f72f1bfbfc3"}
Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.681319 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mlpgl"
Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.686070 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="048beac97f1401b80a7107cf946bd8ac882621de80936787f6987e142986bbe4" exitCode=0
event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"048beac97f1401b80a7107cf946bd8ac882621de80936787f6987e142986bbe4"} Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.686165 4849 scope.go:117] "RemoveContainer" containerID="9bf575ce487faa87fad2e90da46de12216b3b9187fb59a7d04f81930ece3edc9" Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.688010 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv" event={"ID":"e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518","Type":"ContainerStarted","Data":"946adbe83b72ba361cbf634b7e8126bc784275f040dd8af4a28102b10bc60aad"} Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.688776 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv" Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.690629 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl" event={"ID":"4412c89c-f551-4683-8682-8fc188bf086d","Type":"ContainerStarted","Data":"bfa424bbd70ee51d082cb6d9f987e17864794d9525c3103e134b97f0cfd958db"} Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.697518 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mlpgl" podStartSLOduration=1.38929181 podStartE2EDuration="3.697503208s" podCreationTimestamp="2025-12-09 11:39:48 +0000 UTC" firstStartedPulling="2025-12-09 11:39:48.428058915 +0000 UTC m=+770.967943241" lastFinishedPulling="2025-12-09 11:39:50.736270333 +0000 UTC m=+773.276154639" observedRunningTime="2025-12-09 11:39:51.695774955 +0000 UTC m=+774.235659271" watchObservedRunningTime="2025-12-09 11:39:51.697503208 +0000 UTC m=+774.237387524" Dec 09 11:39:51 crc kubenswrapper[4849]: I1209 11:39:51.716939 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv" podStartSLOduration=3.315323843 podStartE2EDuration="4.716917619s" podCreationTimestamp="2025-12-09 11:39:47 +0000 UTC" firstStartedPulling="2025-12-09 11:39:49.340307267 +0000 UTC m=+771.880191573" lastFinishedPulling="2025-12-09 11:39:50.741901033 +0000 UTC m=+773.281785349" observedRunningTime="2025-12-09 11:39:51.714748316 +0000 UTC m=+774.254632632" watchObservedRunningTime="2025-12-09 11:39:51.716917619 +0000 UTC m=+774.256801955" Dec 09 11:39:52 crc kubenswrapper[4849]: I1209 11:39:52.698799 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" event={"ID":"3c6cd138-dbe0-4baf-a149-341d01905fc8","Type":"ContainerStarted","Data":"6f7c2561bc186e3571008b1171c51414ec6b4eafce854386079f2df104c2f38b"} Dec 09 11:39:52 crc kubenswrapper[4849]: I1209 11:39:52.700966 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"fb7e27f11d509caaa9ebc587327526354751d200d66bddbb3fd44be26e61d13f"} Dec 09 11:39:52 crc kubenswrapper[4849]: I1209 11:39:52.714105 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fsp9r" podStartSLOduration=1.964641924 podStartE2EDuration="4.714090924s" podCreationTimestamp="2025-12-09 11:39:48 +0000 UTC" firstStartedPulling="2025-12-09 11:39:48.98888561 +0000 UTC m=+771.528769926" lastFinishedPulling="2025-12-09 11:39:51.73833461 +0000 
UTC m=+774.278218926" observedRunningTime="2025-12-09 11:39:52.714055574 +0000 UTC m=+775.253939910" watchObservedRunningTime="2025-12-09 11:39:52.714090924 +0000 UTC m=+775.253975240" Dec 09 11:39:53 crc kubenswrapper[4849]: I1209 11:39:53.719101 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl" event={"ID":"4412c89c-f551-4683-8682-8fc188bf086d","Type":"ContainerStarted","Data":"8134953b806f786f10a9f7ce87aea93f023594d4f8f3b715ba1c14f946e7ab17"} Dec 09 11:39:53 crc kubenswrapper[4849]: I1209 11:39:53.740123 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4d6rl" podStartSLOduration=2.055343406 podStartE2EDuration="6.740102915s" podCreationTimestamp="2025-12-09 11:39:47 +0000 UTC" firstStartedPulling="2025-12-09 11:39:48.690557799 +0000 UTC m=+771.230442115" lastFinishedPulling="2025-12-09 11:39:53.375317308 +0000 UTC m=+775.915201624" observedRunningTime="2025-12-09 11:39:53.739325686 +0000 UTC m=+776.279210032" watchObservedRunningTime="2025-12-09 11:39:53.740102915 +0000 UTC m=+776.279987241" Dec 09 11:39:58 crc kubenswrapper[4849]: I1209 11:39:58.405833 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mlpgl" Dec 09 11:39:58 crc kubenswrapper[4849]: I1209 11:39:58.751824 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-77c4f49446-f5928" Dec 09 11:39:58 crc kubenswrapper[4849]: I1209 11:39:58.751875 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-77c4f49446-f5928" Dec 09 11:39:58 crc kubenswrapper[4849]: I1209 11:39:58.756811 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-77c4f49446-f5928" Dec 09 11:39:59 crc kubenswrapper[4849]: I1209 11:39:59.759720 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-77c4f49446-f5928" Dec 09 11:39:59 crc kubenswrapper[4849]: I1209 11:39:59.823455 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l6kz7"] Dec 09 11:40:08 crc kubenswrapper[4849]: I1209 11:40:08.934341 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-m2zmv" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.751065 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw"] Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.752730 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.759000 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.773257 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw"] Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.833563 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.833627 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnwh\" (UniqueName: \"kubernetes.io/projected/3eb93973-472b-4a08-ad39-4638fdbdf108-kube-api-access-gxnwh\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.833651 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.934966 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.935024 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxnwh\" (UniqueName: \"kubernetes.io/projected/3eb93973-472b-4a08-ad39-4638fdbdf108-kube-api-access-gxnwh\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.935054 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.935572 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.935872 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:21 crc kubenswrapper[4849]: I1209 11:40:21.960563 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxnwh\" (UniqueName: \"kubernetes.io/projected/3eb93973-472b-4a08-ad39-4638fdbdf108-kube-api-access-gxnwh\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:22 crc kubenswrapper[4849]: I1209 11:40:22.067765 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:22 crc kubenswrapper[4849]: I1209 11:40:22.572199 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw"] Dec 09 11:40:23 crc kubenswrapper[4849]: I1209 11:40:23.111211 4849 generic.go:334] "Generic (PLEG): container finished" podID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerID="bad5e2c3a4e2a0739ea6a10e1c45a0b3eb9b2dc2cf8af5af87c68d08c794af32" exitCode=0 Dec 09 11:40:23 crc kubenswrapper[4849]: I1209 11:40:23.111268 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" event={"ID":"3eb93973-472b-4a08-ad39-4638fdbdf108","Type":"ContainerDied","Data":"bad5e2c3a4e2a0739ea6a10e1c45a0b3eb9b2dc2cf8af5af87c68d08c794af32"} Dec 09 11:40:23 crc kubenswrapper[4849]: I1209 11:40:23.111301 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" event={"ID":"3eb93973-472b-4a08-ad39-4638fdbdf108","Type":"ContainerStarted","Data":"56c561e8bb17c8355514573c62f19b035c05e2cec1c53a7e8409a71c4c83fca1"} Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.120304 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6lhtz"] Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.121863 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.137911 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lhtz"] Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.162124 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-utilities\") pod \"redhat-operators-6lhtz\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.162182 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-catalog-content\") pod \"redhat-operators-6lhtz\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.162250 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6mjj\" (UniqueName: \"kubernetes.io/projected/158aedaa-82ff-4904-90da-a457f26a881f-kube-api-access-b6mjj\") pod \"redhat-operators-6lhtz\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.263070 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-utilities\") pod \"redhat-operators-6lhtz\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.263128 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-catalog-content\") pod \"redhat-operators-6lhtz\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.263179 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6mjj\" (UniqueName: \"kubernetes.io/projected/158aedaa-82ff-4904-90da-a457f26a881f-kube-api-access-b6mjj\") pod \"redhat-operators-6lhtz\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.263987 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-utilities\") pod \"redhat-operators-6lhtz\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.264277 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-catalog-content\") pod \"redhat-operators-6lhtz\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.283032 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b6mjj\" (UniqueName: \"kubernetes.io/projected/158aedaa-82ff-4904-90da-a457f26a881f-kube-api-access-b6mjj\") pod \"redhat-operators-6lhtz\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.456185 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.686912 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lhtz"] Dec 09 11:40:24 crc kubenswrapper[4849]: W1209 11:40:24.689566 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod158aedaa_82ff_4904_90da_a457f26a881f.slice/crio-db529137a2bdc021f2dea33c54c9bbe6b2884808565b70b38d9e00bfd1207edf WatchSource:0}: Error finding container db529137a2bdc021f2dea33c54c9bbe6b2884808565b70b38d9e00bfd1207edf: Status 404 returned error can't find the container with id db529137a2bdc021f2dea33c54c9bbe6b2884808565b70b38d9e00bfd1207edf Dec 09 11:40:24 crc kubenswrapper[4849]: I1209 11:40:24.870118 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-l6kz7" podUID="1e6507b4-4ff1-4fc1-afee-9e6c2e909908" containerName="console" containerID="cri-o://0fff653700a130e88adc6781b72a46075a34790483953906c68440a2d86f0e7d" gracePeriod=15 Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.127300 4849 generic.go:334] "Generic (PLEG): container finished" podID="158aedaa-82ff-4904-90da-a457f26a881f" containerID="bab02c6b1fefd33bc252b771fe2655a162a7c2801bb5d1bd94d3c2e67488d537" exitCode=0 Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.127367 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lhtz" event={"ID":"158aedaa-82ff-4904-90da-a457f26a881f","Type":"ContainerDied","Data":"bab02c6b1fefd33bc252b771fe2655a162a7c2801bb5d1bd94d3c2e67488d537"} Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.127393 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lhtz" event={"ID":"158aedaa-82ff-4904-90da-a457f26a881f","Type":"ContainerStarted","Data":"db529137a2bdc021f2dea33c54c9bbe6b2884808565b70b38d9e00bfd1207edf"} Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.129925 4849 generic.go:334] "Generic (PLEG): container finished" podID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerID="01fb9671406becfa96b1d9734b41d9e3e602bbbc5f02b880f1d8bfa70cf46f48" exitCode=0 Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.129992 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" event={"ID":"3eb93973-472b-4a08-ad39-4638fdbdf108","Type":"ContainerDied","Data":"01fb9671406becfa96b1d9734b41d9e3e602bbbc5f02b880f1d8bfa70cf46f48"} Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.132933 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l6kz7_1e6507b4-4ff1-4fc1-afee-9e6c2e909908/console/0.log" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.132966 4849 generic.go:334] "Generic (PLEG): container finished" podID="1e6507b4-4ff1-4fc1-afee-9e6c2e909908" containerID="0fff653700a130e88adc6781b72a46075a34790483953906c68440a2d86f0e7d" exitCode=2 Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 
11:40:25.132987 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l6kz7" event={"ID":"1e6507b4-4ff1-4fc1-afee-9e6c2e909908","Type":"ContainerDied","Data":"0fff653700a130e88adc6781b72a46075a34790483953906c68440a2d86f0e7d"} Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.524627 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l6kz7_1e6507b4-4ff1-4fc1-afee-9e6c2e909908/console/0.log" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.524901 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.687905 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-serving-cert\") pod \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.687955 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-config\") pod \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.688002 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-service-ca\") pod \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.688044 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh86z\" (UniqueName: \"kubernetes.io/projected/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-kube-api-access-lh86z\") pod \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.688100 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-oauth-serving-cert\") pod \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.688121 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-trusted-ca-bundle\") pod \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.688144 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-oauth-config\") pod \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\" (UID: \"1e6507b4-4ff1-4fc1-afee-9e6c2e909908\") " Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.689306 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-config" (OuterVolumeSpecName: "console-config") pod "1e6507b4-4ff1-4fc1-afee-9e6c2e909908" (UID: "1e6507b4-4ff1-4fc1-afee-9e6c2e909908"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.689345 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1e6507b4-4ff1-4fc1-afee-9e6c2e909908" (UID: "1e6507b4-4ff1-4fc1-afee-9e6c2e909908"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.689388 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-service-ca" (OuterVolumeSpecName: "service-ca") pod "1e6507b4-4ff1-4fc1-afee-9e6c2e909908" (UID: "1e6507b4-4ff1-4fc1-afee-9e6c2e909908"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.689934 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1e6507b4-4ff1-4fc1-afee-9e6c2e909908" (UID: "1e6507b4-4ff1-4fc1-afee-9e6c2e909908"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.698586 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1e6507b4-4ff1-4fc1-afee-9e6c2e909908" (UID: "1e6507b4-4ff1-4fc1-afee-9e6c2e909908"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.698798 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-kube-api-access-lh86z" (OuterVolumeSpecName: "kube-api-access-lh86z") pod "1e6507b4-4ff1-4fc1-afee-9e6c2e909908" (UID: "1e6507b4-4ff1-4fc1-afee-9e6c2e909908"). InnerVolumeSpecName "kube-api-access-lh86z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.704012 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1e6507b4-4ff1-4fc1-afee-9e6c2e909908" (UID: "1e6507b4-4ff1-4fc1-afee-9e6c2e909908"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.789454 4849 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.789491 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh86z\" (UniqueName: \"kubernetes.io/projected/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-kube-api-access-lh86z\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.789504 4849 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.789516 4849 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.789526 4849 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.789536 4849 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:25 crc kubenswrapper[4849]: I1209 11:40:25.789545 4849 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e6507b4-4ff1-4fc1-afee-9e6c2e909908-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.140736 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l6kz7_1e6507b4-4ff1-4fc1-afee-9e6c2e909908/console/0.log" Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.140869 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-l6kz7" Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.141145 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l6kz7" event={"ID":"1e6507b4-4ff1-4fc1-afee-9e6c2e909908","Type":"ContainerDied","Data":"2c9187625b602248c950ecc24956860f43f7dbfd97b1d626f5118b74e918c273"} Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.141214 4849 scope.go:117] "RemoveContainer" containerID="0fff653700a130e88adc6781b72a46075a34790483953906c68440a2d86f0e7d" Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.154175 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lhtz" event={"ID":"158aedaa-82ff-4904-90da-a457f26a881f","Type":"ContainerStarted","Data":"cb5e22250f53b464885ac6092a720ab51f6c41a7e98fd67e8b9959b4251efd04"} Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.158788 4849 generic.go:334] "Generic (PLEG): container finished" podID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerID="51ae23d245f6984319236801581b833437eca5bcb833592b190a5af1c155018b" exitCode=0 Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.159008 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" event={"ID":"3eb93973-472b-4a08-ad39-4638fdbdf108","Type":"ContainerDied","Data":"51ae23d245f6984319236801581b833437eca5bcb833592b190a5af1c155018b"} Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.265620 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l6kz7"] Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.269999 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-l6kz7"] Dec 09 11:40:26 crc kubenswrapper[4849]: I1209 11:40:26.585211 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e6507b4-4ff1-4fc1-afee-9e6c2e909908" path="/var/lib/kubelet/pods/1e6507b4-4ff1-4fc1-afee-9e6c2e909908/volumes" Dec 09 11:40:27 crc kubenswrapper[4849]: I1209 11:40:27.932668 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.078484 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-util\") pod \"3eb93973-472b-4a08-ad39-4638fdbdf108\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.078598 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-bundle\") pod \"3eb93973-472b-4a08-ad39-4638fdbdf108\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.078680 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxnwh\" (UniqueName: \"kubernetes.io/projected/3eb93973-472b-4a08-ad39-4638fdbdf108-kube-api-access-gxnwh\") pod \"3eb93973-472b-4a08-ad39-4638fdbdf108\" (UID: \"3eb93973-472b-4a08-ad39-4638fdbdf108\") " Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.079573 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-bundle" (OuterVolumeSpecName: "bundle") pod "3eb93973-472b-4a08-ad39-4638fdbdf108" (UID: "3eb93973-472b-4a08-ad39-4638fdbdf108"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.091188 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-util" (OuterVolumeSpecName: "util") pod "3eb93973-472b-4a08-ad39-4638fdbdf108" (UID: "3eb93973-472b-4a08-ad39-4638fdbdf108"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.097815 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb93973-472b-4a08-ad39-4638fdbdf108-kube-api-access-gxnwh" (OuterVolumeSpecName: "kube-api-access-gxnwh") pod "3eb93973-472b-4a08-ad39-4638fdbdf108" (UID: "3eb93973-472b-4a08-ad39-4638fdbdf108"). InnerVolumeSpecName "kube-api-access-gxnwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.172373 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" event={"ID":"3eb93973-472b-4a08-ad39-4638fdbdf108","Type":"ContainerDied","Data":"56c561e8bb17c8355514573c62f19b035c05e2cec1c53a7e8409a71c4c83fca1"} Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.172712 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c561e8bb17c8355514573c62f19b035c05e2cec1c53a7e8409a71c4c83fca1" Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.172437 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw" Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.179885 4849 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-util\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.179909 4849 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb93973-472b-4a08-ad39-4638fdbdf108-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:28 crc kubenswrapper[4849]: I1209 11:40:28.179919 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxnwh\" (UniqueName: \"kubernetes.io/projected/3eb93973-472b-4a08-ad39-4638fdbdf108-kube-api-access-gxnwh\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:30 crc kubenswrapper[4849]: I1209 11:40:30.187947 4849 generic.go:334] "Generic (PLEG): container finished" podID="158aedaa-82ff-4904-90da-a457f26a881f" containerID="cb5e22250f53b464885ac6092a720ab51f6c41a7e98fd67e8b9959b4251efd04" exitCode=0 Dec 09 11:40:30 crc kubenswrapper[4849]: I1209 11:40:30.187997 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lhtz" event={"ID":"158aedaa-82ff-4904-90da-a457f26a881f","Type":"ContainerDied","Data":"cb5e22250f53b464885ac6092a720ab51f6c41a7e98fd67e8b9959b4251efd04"} Dec 09 11:40:31 crc kubenswrapper[4849]: I1209 11:40:31.195336 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lhtz" event={"ID":"158aedaa-82ff-4904-90da-a457f26a881f","Type":"ContainerStarted","Data":"00fd60d14394a25137adb6187d40148191b5cc359dfc7761515a070c939d649b"} Dec 09 11:40:31 crc kubenswrapper[4849]: I1209 11:40:31.221534 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6lhtz" podStartSLOduration=1.706316446 podStartE2EDuration="7.221520193s" podCreationTimestamp="2025-12-09 11:40:24 +0000 UTC" firstStartedPulling="2025-12-09 11:40:25.128654803 +0000 UTC m=+807.668539119" lastFinishedPulling="2025-12-09 11:40:30.64385855 +0000 UTC m=+813.183742866" observedRunningTime="2025-12-09 11:40:31.220304162 +0000 UTC m=+813.760188478" watchObservedRunningTime="2025-12-09 11:40:31.221520193 +0000 UTC m=+813.761404509" Dec 09 11:40:34 crc kubenswrapper[4849]: I1209 11:40:34.457091 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:34 crc kubenswrapper[4849]: I1209 11:40:34.457993 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:35 crc kubenswrapper[4849]: I1209 11:40:35.495329 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lhtz" podUID="158aedaa-82ff-4904-90da-a457f26a881f" containerName="registry-server" probeResult="failure" output=< Dec 09 11:40:35 crc kubenswrapper[4849]: timeout: failed to connect service ":50051" within 1s Dec 09 11:40:35 crc kubenswrapper[4849]: > Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.451317 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt"] Dec 09 11:40:37 crc kubenswrapper[4849]: E1209 11:40:37.451676 4849 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerName="pull" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.451695 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerName="pull" Dec 09 11:40:37 crc kubenswrapper[4849]: E1209 11:40:37.451710 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6507b4-4ff1-4fc1-afee-9e6c2e909908" containerName="console" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.451717 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6507b4-4ff1-4fc1-afee-9e6c2e909908" containerName="console" Dec 09 11:40:37 crc kubenswrapper[4849]: E1209 11:40:37.451727 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerName="extract" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.451734 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerName="extract" Dec 09 11:40:37 crc kubenswrapper[4849]: E1209 11:40:37.451755 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerName="util" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.451761 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerName="util" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.451866 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb93973-472b-4a08-ad39-4638fdbdf108" containerName="extract" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.451881 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6507b4-4ff1-4fc1-afee-9e6c2e909908" containerName="console" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.452503 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.455233 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.455668 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.462346 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.471738 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.472781 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ft598" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.482561 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt"] Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.574590 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19796ce6-f4e9-451a-ba5a-85624de86e77-apiservice-cert\") pod \"metallb-operator-controller-manager-9db7cfdf8-7vdzt\" (UID: \"19796ce6-f4e9-451a-ba5a-85624de86e77\") " pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.574673 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8ll7\" (UniqueName: \"kubernetes.io/projected/19796ce6-f4e9-451a-ba5a-85624de86e77-kube-api-access-r8ll7\") pod \"metallb-operator-controller-manager-9db7cfdf8-7vdzt\" (UID: \"19796ce6-f4e9-451a-ba5a-85624de86e77\") " pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.574721 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19796ce6-f4e9-451a-ba5a-85624de86e77-webhook-cert\") pod \"metallb-operator-controller-manager-9db7cfdf8-7vdzt\" (UID: \"19796ce6-f4e9-451a-ba5a-85624de86e77\") " pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.676751 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19796ce6-f4e9-451a-ba5a-85624de86e77-apiservice-cert\") pod \"metallb-operator-controller-manager-9db7cfdf8-7vdzt\" (UID: \"19796ce6-f4e9-451a-ba5a-85624de86e77\") " pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.676802 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8ll7\" (UniqueName: \"kubernetes.io/projected/19796ce6-f4e9-451a-ba5a-85624de86e77-kube-api-access-r8ll7\") pod \"metallb-operator-controller-manager-9db7cfdf8-7vdzt\" (UID: \"19796ce6-f4e9-451a-ba5a-85624de86e77\") " pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.676841 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19796ce6-f4e9-451a-ba5a-85624de86e77-webhook-cert\") pod \"metallb-operator-controller-manager-9db7cfdf8-7vdzt\" (UID: \"19796ce6-f4e9-451a-ba5a-85624de86e77\") " pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.682972 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19796ce6-f4e9-451a-ba5a-85624de86e77-apiservice-cert\") pod \"metallb-operator-controller-manager-9db7cfdf8-7vdzt\" (UID: \"19796ce6-f4e9-451a-ba5a-85624de86e77\") " pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.692748 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19796ce6-f4e9-451a-ba5a-85624de86e77-webhook-cert\") pod \"metallb-operator-controller-manager-9db7cfdf8-7vdzt\" (UID: \"19796ce6-f4e9-451a-ba5a-85624de86e77\") " pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.696002 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8ll7\" (UniqueName: \"kubernetes.io/projected/19796ce6-f4e9-451a-ba5a-85624de86e77-kube-api-access-r8ll7\") pod \"metallb-operator-controller-manager-9db7cfdf8-7vdzt\" (UID: \"19796ce6-f4e9-451a-ba5a-85624de86e77\") " pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.771781 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.784685 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg"] Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.785453 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.792673 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.792880 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.793081 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-fn57r" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.874810 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg"] Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.878861 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbgp2\" (UniqueName: \"kubernetes.io/projected/1c26adb0-81b9-4722-b799-4cc66c301025-kube-api-access-nbgp2\") pod \"metallb-operator-webhook-server-655d65f479-n7rjg\" (UID: \"1c26adb0-81b9-4722-b799-4cc66c301025\") " pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.878913 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c26adb0-81b9-4722-b799-4cc66c301025-webhook-cert\") pod \"metallb-operator-webhook-server-655d65f479-n7rjg\" (UID: \"1c26adb0-81b9-4722-b799-4cc66c301025\") " pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.878940 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c26adb0-81b9-4722-b799-4cc66c301025-apiservice-cert\") pod \"metallb-operator-webhook-server-655d65f479-n7rjg\" (UID: \"1c26adb0-81b9-4722-b799-4cc66c301025\") " pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.980570 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbgp2\" (UniqueName: \"kubernetes.io/projected/1c26adb0-81b9-4722-b799-4cc66c301025-kube-api-access-nbgp2\") pod \"metallb-operator-webhook-server-655d65f479-n7rjg\" (UID: \"1c26adb0-81b9-4722-b799-4cc66c301025\") " pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.980623 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c26adb0-81b9-4722-b799-4cc66c301025-webhook-cert\") pod \"metallb-operator-webhook-server-655d65f479-n7rjg\" (UID: \"1c26adb0-81b9-4722-b799-4cc66c301025\") " pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.980646 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c26adb0-81b9-4722-b799-4cc66c301025-apiservice-cert\") pod \"metallb-operator-webhook-server-655d65f479-n7rjg\" (UID: \"1c26adb0-81b9-4722-b799-4cc66c301025\") " pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 
11:40:37.983727 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c26adb0-81b9-4722-b799-4cc66c301025-apiservice-cert\") pod \"metallb-operator-webhook-server-655d65f479-n7rjg\" (UID: \"1c26adb0-81b9-4722-b799-4cc66c301025\") " pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.986219 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c26adb0-81b9-4722-b799-4cc66c301025-webhook-cert\") pod \"metallb-operator-webhook-server-655d65f479-n7rjg\" (UID: \"1c26adb0-81b9-4722-b799-4cc66c301025\") " pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:37 crc kubenswrapper[4849]: I1209 11:40:37.998183 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbgp2\" (UniqueName: \"kubernetes.io/projected/1c26adb0-81b9-4722-b799-4cc66c301025-kube-api-access-nbgp2\") pod \"metallb-operator-webhook-server-655d65f479-n7rjg\" (UID: \"1c26adb0-81b9-4722-b799-4cc66c301025\") " pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:38 crc kubenswrapper[4849]: I1209 11:40:38.184609 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:38 crc kubenswrapper[4849]: I1209 11:40:38.192689 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt"] Dec 09 11:40:38 crc kubenswrapper[4849]: I1209 11:40:38.254769 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" event={"ID":"19796ce6-f4e9-451a-ba5a-85624de86e77","Type":"ContainerStarted","Data":"daf079bcd6cbba4dd76e6667814e6a77d13605445b3884ccfd0baa73730a80a7"} Dec 09 11:40:38 crc kubenswrapper[4849]: I1209 11:40:38.739550 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg"] Dec 09 11:40:39 crc kubenswrapper[4849]: I1209 11:40:39.279095 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" event={"ID":"1c26adb0-81b9-4722-b799-4cc66c301025","Type":"ContainerStarted","Data":"243a45744d0516ba3cafce1680770cac831002b7d76f4cd850c82bc76331aba4"} Dec 09 11:40:44 crc kubenswrapper[4849]: I1209 11:40:44.514237 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:44 crc kubenswrapper[4849]: I1209 11:40:44.572480 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:46 crc kubenswrapper[4849]: I1209 11:40:46.507835 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lhtz"] Dec 09 11:40:46 crc kubenswrapper[4849]: I1209 11:40:46.508035 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6lhtz" podUID="158aedaa-82ff-4904-90da-a457f26a881f" containerName="registry-server" containerID="cri-o://00fd60d14394a25137adb6187d40148191b5cc359dfc7761515a070c939d649b" gracePeriod=2 Dec 09 11:40:47 crc kubenswrapper[4849]: I1209 11:40:47.510522 4849 generic.go:334] "Generic (PLEG): container finished" 
podID="158aedaa-82ff-4904-90da-a457f26a881f" containerID="00fd60d14394a25137adb6187d40148191b5cc359dfc7761515a070c939d649b" exitCode=0 Dec 09 11:40:47 crc kubenswrapper[4849]: I1209 11:40:47.510578 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lhtz" event={"ID":"158aedaa-82ff-4904-90da-a457f26a881f","Type":"ContainerDied","Data":"00fd60d14394a25137adb6187d40148191b5cc359dfc7761515a070c939d649b"} Dec 09 11:40:47 crc kubenswrapper[4849]: I1209 11:40:47.810073 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.006528 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-utilities\") pod \"158aedaa-82ff-4904-90da-a457f26a881f\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.006595 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6mjj\" (UniqueName: \"kubernetes.io/projected/158aedaa-82ff-4904-90da-a457f26a881f-kube-api-access-b6mjj\") pod \"158aedaa-82ff-4904-90da-a457f26a881f\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.006646 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-catalog-content\") pod \"158aedaa-82ff-4904-90da-a457f26a881f\" (UID: \"158aedaa-82ff-4904-90da-a457f26a881f\") " Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.007836 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-utilities" (OuterVolumeSpecName: "utilities") pod "158aedaa-82ff-4904-90da-a457f26a881f" (UID: "158aedaa-82ff-4904-90da-a457f26a881f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.013571 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158aedaa-82ff-4904-90da-a457f26a881f-kube-api-access-b6mjj" (OuterVolumeSpecName: "kube-api-access-b6mjj") pod "158aedaa-82ff-4904-90da-a457f26a881f" (UID: "158aedaa-82ff-4904-90da-a457f26a881f"). InnerVolumeSpecName "kube-api-access-b6mjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.107539 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.107578 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6mjj\" (UniqueName: \"kubernetes.io/projected/158aedaa-82ff-4904-90da-a457f26a881f-kube-api-access-b6mjj\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.134322 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "158aedaa-82ff-4904-90da-a457f26a881f" (UID: "158aedaa-82ff-4904-90da-a457f26a881f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.209027 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158aedaa-82ff-4904-90da-a457f26a881f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.520967 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lhtz" event={"ID":"158aedaa-82ff-4904-90da-a457f26a881f","Type":"ContainerDied","Data":"db529137a2bdc021f2dea33c54c9bbe6b2884808565b70b38d9e00bfd1207edf"} Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.521776 4849 scope.go:117] "RemoveContainer" containerID="00fd60d14394a25137adb6187d40148191b5cc359dfc7761515a070c939d649b" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.521223 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lhtz" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.526094 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" event={"ID":"19796ce6-f4e9-451a-ba5a-85624de86e77","Type":"ContainerStarted","Data":"f07af296eca1c4a8f7e7922db5a215e42d7c2e37a857077587bd515973f99b90"} Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.526681 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.527851 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" event={"ID":"1c26adb0-81b9-4722-b799-4cc66c301025","Type":"ContainerStarted","Data":"ca5e5791e6081da2d17a33ea24de6d17385e5b1a6a9692bc6327042233d9621d"} Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.528104 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.559782 4849 scope.go:117] "RemoveContainer" containerID="cb5e22250f53b464885ac6092a720ab51f6c41a7e98fd67e8b9959b4251efd04" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.584877 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt" podStartSLOduration=2.256910658 podStartE2EDuration="11.58485393s" podCreationTimestamp="2025-12-09 11:40:37 +0000 UTC" firstStartedPulling="2025-12-09 11:40:38.211318151 +0000 UTC m=+820.751202467" lastFinishedPulling="2025-12-09 11:40:47.539261423 +0000 UTC m=+830.079145739" observedRunningTime="2025-12-09 11:40:48.577715571 +0000 UTC m=+831.117599907" watchObservedRunningTime="2025-12-09 11:40:48.58485393 +0000 UTC m=+831.124738256" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.588501 4849 scope.go:117] "RemoveContainer" containerID="bab02c6b1fefd33bc252b771fe2655a162a7c2801bb5d1bd94d3c2e67488d537" Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.609310 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lhtz"] Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.616163 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6lhtz"] Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.653969 4849 
Dec 09 11:40:48 crc kubenswrapper[4849]: I1209 11:40:48.653969 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg" podStartSLOduration=2.814055225 podStartE2EDuration="11.653945097s" podCreationTimestamp="2025-12-09 11:40:37 +0000 UTC" firstStartedPulling="2025-12-09 11:40:38.748044775 +0000 UTC m=+821.287929081" lastFinishedPulling="2025-12-09 11:40:47.587934647 +0000 UTC m=+830.127818953" observedRunningTime="2025-12-09 11:40:48.653192488 +0000 UTC m=+831.193076834" watchObservedRunningTime="2025-12-09 11:40:48.653945097 +0000 UTC m=+831.193829433"
Dec 09 11:40:50 crc kubenswrapper[4849]: I1209 11:40:50.543338 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158aedaa-82ff-4904-90da-a457f26a881f" path="/var/lib/kubelet/pods/158aedaa-82ff-4904-90da-a457f26a881f/volumes"
Dec 09 11:40:58 crc kubenswrapper[4849]: I1209 11:40:58.189452 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-655d65f479-n7rjg"
Dec 09 11:41:17 crc kubenswrapper[4849]: I1209 11:41:17.780582 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-9db7cfdf8-7vdzt"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.730992 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-k6bpg"]
Dec 09 11:41:18 crc kubenswrapper[4849]: E1209 11:41:18.731835 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158aedaa-82ff-4904-90da-a457f26a881f" containerName="extract-content"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.731857 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="158aedaa-82ff-4904-90da-a457f26a881f" containerName="extract-content"
Dec 09 11:41:18 crc kubenswrapper[4849]: E1209 11:41:18.731870 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158aedaa-82ff-4904-90da-a457f26a881f" containerName="registry-server"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.731881 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="158aedaa-82ff-4904-90da-a457f26a881f" containerName="registry-server"
Dec 09 11:41:18 crc kubenswrapper[4849]: E1209 11:41:18.731906 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158aedaa-82ff-4904-90da-a457f26a881f" containerName="extract-utilities"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.731913 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="158aedaa-82ff-4904-90da-a457f26a881f" containerName="extract-utilities"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.732049 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="158aedaa-82ff-4904-90da-a457f26a881f" containerName="registry-server"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.734696 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.738469 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.738731 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.738889 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vrwwj"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.749841 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"]
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.750672 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.754043 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.788787 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"]
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.876079 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lxwrr"]
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.885304 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.886696 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.888577 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.888690 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.889444 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-c5kp6"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.893374 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-mdbqt"]
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.894667 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.894904 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-frr-startup\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.894932 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-frr-sockets\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.894979 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-metrics-certs\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.895000 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-frr-conf\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.895055 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-metrics\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.895275 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cdgm\" (UniqueName: \"kubernetes.io/projected/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-kube-api-access-4cdgm\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.895485 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-reloader\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.895572 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a39fe675-ad51-4758-a2f3-b911b8a9f5fd-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bsgvl\" (UID: \"a39fe675-ad51-4758-a2f3-b911b8a9f5fd\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.895610 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qcvk\" (UniqueName: \"kubernetes.io/projected/a39fe675-ad51-4758-a2f3-b911b8a9f5fd-kube-api-access-7qcvk\") pod \"frr-k8s-webhook-server-7fcb986d4-bsgvl\" (UID: \"a39fe675-ad51-4758-a2f3-b911b8a9f5fd\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.898921 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.927511 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-mdbqt"]
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.996615 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-metrics-certs\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.996927 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhmcm\" (UniqueName: \"kubernetes.io/projected/0470a171-1894-4d83-b3d3-aae6580ef2e1-kube-api-access-mhmcm\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:18 crc kubenswrapper[4849]: E1209 11:41:18.996801 4849 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Dec 09 11:41:18 crc kubenswrapper[4849]: E1209 11:41:18.997098 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-metrics-certs podName:7f4f8e75-d158-487b-872b-4cfa2cb0b98b nodeName:}" failed. No retries permitted until 2025-12-09 11:41:19.497076699 +0000 UTC m=+862.036961095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-metrics-certs") pod "frr-k8s-k6bpg" (UID: "7f4f8e75-d158-487b-872b-4cfa2cb0b98b") : secret "frr-k8s-certs-secret" not found
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997033 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-metrics-certs\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997169 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-frr-conf\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997228 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-metrics\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997272 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdgm\" (UniqueName: \"kubernetes.io/projected/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-kube-api-access-4cdgm\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997304 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0470a171-1894-4d83-b3d3-aae6580ef2e1-cert\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997323 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfp9\" (UniqueName: \"kubernetes.io/projected/c3a2373d-8193-43ee-b1de-003115ad48f6-kube-api-access-7gfp9\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997365 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c3a2373d-8193-43ee-b1de-003115ad48f6-metallb-excludel2\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997440 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-reloader\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997474 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a39fe675-ad51-4758-a2f3-b911b8a9f5fd-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bsgvl\" (UID: \"a39fe675-ad51-4758-a2f3-b911b8a9f5fd\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997493 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qcvk\" (UniqueName: \"kubernetes.io/projected/a39fe675-ad51-4758-a2f3-b911b8a9f5fd-kube-api-access-7qcvk\") pod \"frr-k8s-webhook-server-7fcb986d4-bsgvl\" (UID: \"a39fe675-ad51-4758-a2f3-b911b8a9f5fd\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997515 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997539 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0470a171-1894-4d83-b3d3-aae6580ef2e1-metrics-certs\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997574 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-frr-startup\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997612 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-frr-sockets\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997725 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-metrics\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997904 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-frr-sockets\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.997955 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-frr-conf\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: E1209 11:41:18.997985 4849 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Dec 09 11:41:18 crc kubenswrapper[4849]: E1209 11:41:18.998018 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a39fe675-ad51-4758-a2f3-b911b8a9f5fd-cert podName:a39fe675-ad51-4758-a2f3-b911b8a9f5fd nodeName:}" failed. No retries permitted until 2025-12-09 11:41:19.498007883 +0000 UTC m=+862.037892199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a39fe675-ad51-4758-a2f3-b911b8a9f5fd-cert") pod "frr-k8s-webhook-server-7fcb986d4-bsgvl" (UID: "a39fe675-ad51-4758-a2f3-b911b8a9f5fd") : secret "frr-k8s-webhook-server-cert" not found
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.998369 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-reloader\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:18 crc kubenswrapper[4849]: I1209 11:41:18.998865 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-frr-startup\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.019541 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdgm\" (UniqueName: \"kubernetes.io/projected/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-kube-api-access-4cdgm\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.026782 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qcvk\" (UniqueName: \"kubernetes.io/projected/a39fe675-ad51-4758-a2f3-b911b8a9f5fd-kube-api-access-7qcvk\") pod \"frr-k8s-webhook-server-7fcb986d4-bsgvl\" (UID: \"a39fe675-ad51-4758-a2f3-b911b8a9f5fd\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.099036 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.099086 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0470a171-1894-4d83-b3d3-aae6580ef2e1-metrics-certs\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.099173 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhmcm\" (UniqueName: \"kubernetes.io/projected/0470a171-1894-4d83-b3d3-aae6580ef2e1-kube-api-access-mhmcm\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:19 crc kubenswrapper[4849]: E1209 11:41:19.099225 4849 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 09 11:41:19 crc kubenswrapper[4849]: E1209 11:41:19.099287 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist podName:c3a2373d-8193-43ee-b1de-003115ad48f6 nodeName:}" failed. No retries permitted until 2025-12-09 11:41:19.599268488 +0000 UTC m=+862.139152814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist") pod "speaker-lxwrr" (UID: "c3a2373d-8193-43ee-b1de-003115ad48f6") : secret "metallb-memberlist" not found
Dec 09 11:41:19 crc kubenswrapper[4849]: E1209 11:41:19.099338 4849 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Dec 09 11:41:19 crc kubenswrapper[4849]: E1209 11:41:19.099373 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0470a171-1894-4d83-b3d3-aae6580ef2e1-metrics-certs podName:0470a171-1894-4d83-b3d3-aae6580ef2e1 nodeName:}" failed. No retries permitted until 2025-12-09 11:41:19.5993622 +0000 UTC m=+862.139246516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0470a171-1894-4d83-b3d3-aae6580ef2e1-metrics-certs") pod "controller-f8648f98b-mdbqt" (UID: "0470a171-1894-4d83-b3d3-aae6580ef2e1") : secret "controller-certs-secret" not found
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.099492 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-metrics-certs\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.099555 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0470a171-1894-4d83-b3d3-aae6580ef2e1-cert\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:19 crc kubenswrapper[4849]: E1209 11:41:19.099574 4849 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Dec 09 11:41:19 crc kubenswrapper[4849]: E1209 11:41:19.099617 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-metrics-certs podName:c3a2373d-8193-43ee-b1de-003115ad48f6 nodeName:}" failed. No retries permitted until 2025-12-09 11:41:19.599604616 +0000 UTC m=+862.139488932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-metrics-certs") pod "speaker-lxwrr" (UID: "c3a2373d-8193-43ee-b1de-003115ad48f6") : secret "speaker-certs-secret" not found
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.099580 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gfp9\" (UniqueName: \"kubernetes.io/projected/c3a2373d-8193-43ee-b1de-003115ad48f6-kube-api-access-7gfp9\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.099671 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c3a2373d-8193-43ee-b1de-003115ad48f6-metallb-excludel2\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.100514 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c3a2373d-8193-43ee-b1de-003115ad48f6-metallb-excludel2\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.101337 4849 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.116694 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0470a171-1894-4d83-b3d3-aae6580ef2e1-cert\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.118800 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gfp9\" (UniqueName: \"kubernetes.io/projected/c3a2373d-8193-43ee-b1de-003115ad48f6-kube-api-access-7gfp9\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.126197 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhmcm\" (UniqueName: \"kubernetes.io/projected/0470a171-1894-4d83-b3d3-aae6580ef2e1-kube-api-access-mhmcm\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.505750 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a39fe675-ad51-4758-a2f3-b911b8a9f5fd-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bsgvl\" (UID: \"a39fe675-ad51-4758-a2f3-b911b8a9f5fd\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.505897 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-metrics-certs\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.509125 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a39fe675-ad51-4758-a2f3-b911b8a9f5fd-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bsgvl\" (UID: \"a39fe675-ad51-4758-a2f3-b911b8a9f5fd\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.509798 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4f8e75-d158-487b-872b-4cfa2cb0b98b-metrics-certs\") pod \"frr-k8s-k6bpg\" (UID: \"7f4f8e75-d158-487b-872b-4cfa2cb0b98b\") " pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.607791 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.607877 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0470a171-1894-4d83-b3d3-aae6580ef2e1-metrics-certs\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.607961 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-metrics-certs\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:19 crc kubenswrapper[4849]: E1209 11:41:19.608020 4849 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 09 11:41:19 crc kubenswrapper[4849]: E1209 11:41:19.608115 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist podName:c3a2373d-8193-43ee-b1de-003115ad48f6 nodeName:}" failed. No retries permitted until 2025-12-09 11:41:20.608090957 +0000 UTC m=+863.147975293 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist") pod "speaker-lxwrr" (UID: "c3a2373d-8193-43ee-b1de-003115ad48f6") : secret "metallb-memberlist" not found
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.611801 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0470a171-1894-4d83-b3d3-aae6580ef2e1-metrics-certs\") pod \"controller-f8648f98b-mdbqt\" (UID: \"0470a171-1894-4d83-b3d3-aae6580ef2e1\") " pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.612038 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-metrics-certs\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.700855 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k6bpg"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.719729 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.830404 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:19 crc kubenswrapper[4849]: I1209 11:41:19.980977 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl"]
Dec 09 11:41:19 crc kubenswrapper[4849]: W1209 11:41:19.988151 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda39fe675_ad51_4758_a2f3_b911b8a9f5fd.slice/crio-8487a27e2ba0fb814fb1498a44684df9d1e36704a79ef37433c983c8dd7f42d5 WatchSource:0}: Error finding container 8487a27e2ba0fb814fb1498a44684df9d1e36704a79ef37433c983c8dd7f42d5: Status 404 returned error can't find the container with id 8487a27e2ba0fb814fb1498a44684df9d1e36704a79ef37433c983c8dd7f42d5
Dec 09 11:41:20 crc kubenswrapper[4849]: I1209 11:41:20.274190 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-mdbqt"]
Dec 09 11:41:20 crc kubenswrapper[4849]: W1209 11:41:20.276620 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0470a171_1894_4d83_b3d3_aae6580ef2e1.slice/crio-b8adfbe353ebeb0f1e721159ebab510abdde379f2cf78170a1ad38150d62084a WatchSource:0}: Error finding container b8adfbe353ebeb0f1e721159ebab510abdde379f2cf78170a1ad38150d62084a: Status 404 returned error can't find the container with id b8adfbe353ebeb0f1e721159ebab510abdde379f2cf78170a1ad38150d62084a
Dec 09 11:41:20 crc kubenswrapper[4849]: I1209 11:41:20.634911 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:20 crc kubenswrapper[4849]: E1209 11:41:20.634994 4849 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 09 11:41:20 crc kubenswrapper[4849]: E1209 11:41:20.635110 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist podName:c3a2373d-8193-43ee-b1de-003115ad48f6 nodeName:}" failed. No retries permitted until 2025-12-09 11:41:22.635095865 +0000 UTC m=+865.174980181 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist") pod "speaker-lxwrr" (UID: "c3a2373d-8193-43ee-b1de-003115ad48f6") : secret "metallb-memberlist" not found
Dec 09 11:41:20 crc kubenswrapper[4849]: I1209 11:41:20.702122 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl" event={"ID":"a39fe675-ad51-4758-a2f3-b911b8a9f5fd","Type":"ContainerStarted","Data":"8487a27e2ba0fb814fb1498a44684df9d1e36704a79ef37433c983c8dd7f42d5"}
Dec 09 11:41:20 crc kubenswrapper[4849]: I1209 11:41:20.703360 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-mdbqt" event={"ID":"0470a171-1894-4d83-b3d3-aae6580ef2e1","Type":"ContainerStarted","Data":"b8adfbe353ebeb0f1e721159ebab510abdde379f2cf78170a1ad38150d62084a"}
Dec 09 11:41:21 crc kubenswrapper[4849]: I1209 11:41:21.724869 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerStarted","Data":"f75140bf54850c17c9ad6f90e33afd6491aaa659d35fe4f667a40884151c652a"}
Dec 09 11:41:21 crc kubenswrapper[4849]: I1209 11:41:21.730015 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-mdbqt" event={"ID":"0470a171-1894-4d83-b3d3-aae6580ef2e1","Type":"ContainerStarted","Data":"136ce2d7af48c439c2a7a5113fd25e04c98035e4b78396710ed0307714c23c91"}
Dec 09 11:41:21 crc kubenswrapper[4849]: I1209 11:41:21.730056 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-mdbqt" event={"ID":"0470a171-1894-4d83-b3d3-aae6580ef2e1","Type":"ContainerStarted","Data":"90e20c90f4ac2c9dea6a0389a0e60f149ad6bb0232d02b52c2a50c590c136913"}
Dec 09 11:41:21 crc kubenswrapper[4849]: I1209 11:41:21.730189 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-mdbqt"
Dec 09 11:41:22 crc kubenswrapper[4849]: I1209 11:41:22.686570 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
Dec 09 11:41:22 crc kubenswrapper[4849]: I1209 11:41:22.698243 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3a2373d-8193-43ee-b1de-003115ad48f6-memberlist\") pod \"speaker-lxwrr\" (UID: \"c3a2373d-8193-43ee-b1de-003115ad48f6\") " pod="metallb-system/speaker-lxwrr"
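[editor's note] The MountVolume.SetUp failures above all have the same shape: the metallb pods reference certificate secrets (frr-k8s-certs-secret, frr-k8s-webhook-server-cert, metallb-memberlist, controller-certs-secret, speaker-certs-secret) that the operator has not yet created, and the kubelet retries each mount with a doubling backoff, visible in the log as durationBeforeRetry 500ms, then 1s, then 2s, until the secret appears and the next retry succeeds. A minimal sketch of watching for one of these secrets from outside the node, assuming client-go and a reachable kubeconfig; the names come from the log, but the polling loop is illustrative and not the kubelet's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location (an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirror the doubling retry interval seen in the log: 500ms, 1s, 2s, ...
	backoff := 500 * time.Millisecond
	for {
		_, err := cs.CoreV1().Secrets("metallb-system").
			Get(context.TODO(), "metallb-memberlist", metav1.GetOptions{})
		if err == nil {
			fmt.Println("secret exists; the kubelet's next mount retry should succeed")
			return
		}
		if !apierrors.IsNotFound(err) {
			panic(err) // anything other than NotFound is a real problem
		}
		fmt.Printf("secret not found, retrying in %v\n", backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
}

As the subsequent entries show, this is exactly how the incident resolves on its own: metallb-memberlist appears within a few seconds and the mount succeeds at 11:41:22.698243.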
Need to start a new one" pod="metallb-system/speaker-lxwrr" Dec 09 11:41:23 crc kubenswrapper[4849]: I1209 11:41:23.747108 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lxwrr" event={"ID":"c3a2373d-8193-43ee-b1de-003115ad48f6","Type":"ContainerStarted","Data":"c3c2ee04441fe27df826373cbd593fa0432fb2c57d0233b3baa327c748094d4c"} Dec 09 11:41:23 crc kubenswrapper[4849]: I1209 11:41:23.747569 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lxwrr" event={"ID":"c3a2373d-8193-43ee-b1de-003115ad48f6","Type":"ContainerStarted","Data":"beb29a179142ef83c99127fc542ab9ca9d06426f9119cb2009c9517089f6a8e3"} Dec 09 11:41:23 crc kubenswrapper[4849]: I1209 11:41:23.747586 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lxwrr" event={"ID":"c3a2373d-8193-43ee-b1de-003115ad48f6","Type":"ContainerStarted","Data":"a8a391c95b0783b6986cec14be5e52b1c628b90e72e1438b1afed6741a8451e0"} Dec 09 11:41:23 crc kubenswrapper[4849]: I1209 11:41:23.747832 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lxwrr" Dec 09 11:41:23 crc kubenswrapper[4849]: I1209 11:41:23.768601 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-mdbqt" podStartSLOduration=5.76858154 podStartE2EDuration="5.76858154s" podCreationTimestamp="2025-12-09 11:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:41:21.756641316 +0000 UTC m=+864.296525652" watchObservedRunningTime="2025-12-09 11:41:23.76858154 +0000 UTC m=+866.308465856" Dec 09 11:41:23 crc kubenswrapper[4849]: I1209 11:41:23.769981 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lxwrr" podStartSLOduration=5.769973015 podStartE2EDuration="5.769973015s" podCreationTimestamp="2025-12-09 11:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:41:23.763063 +0000 UTC m=+866.302947326" watchObservedRunningTime="2025-12-09 11:41:23.769973015 +0000 UTC m=+866.309857341" Dec 09 11:41:31 crc kubenswrapper[4849]: I1209 11:41:31.871916 4849 generic.go:334] "Generic (PLEG): container finished" podID="7f4f8e75-d158-487b-872b-4cfa2cb0b98b" containerID="31a87565315b9b5f2135f71d2e1c43995e5b29b48d152149d6cae5a7ea34fb5d" exitCode=0 Dec 09 11:41:31 crc kubenswrapper[4849]: I1209 11:41:31.872019 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerDied","Data":"31a87565315b9b5f2135f71d2e1c43995e5b29b48d152149d6cae5a7ea34fb5d"} Dec 09 11:41:31 crc kubenswrapper[4849]: I1209 11:41:31.875529 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl" event={"ID":"a39fe675-ad51-4758-a2f3-b911b8a9f5fd","Type":"ContainerStarted","Data":"76cdcaeb39a282397b0cffcf55f61493e667249d0e2e9a38d1e351e00f83fbef"} Dec 09 11:41:31 crc kubenswrapper[4849]: I1209 11:41:31.875670 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl" Dec 09 11:41:31 crc kubenswrapper[4849]: I1209 11:41:31.927308 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl" 
podStartSLOduration=2.450432358 podStartE2EDuration="13.927290138s" podCreationTimestamp="2025-12-09 11:41:18 +0000 UTC" firstStartedPulling="2025-12-09 11:41:19.990889242 +0000 UTC m=+862.530773558" lastFinishedPulling="2025-12-09 11:41:31.467747022 +0000 UTC m=+874.007631338" observedRunningTime="2025-12-09 11:41:31.914829901 +0000 UTC m=+874.454714217" watchObservedRunningTime="2025-12-09 11:41:31.927290138 +0000 UTC m=+874.467174444" Dec 09 11:41:32 crc kubenswrapper[4849]: I1209 11:41:32.882923 4849 generic.go:334] "Generic (PLEG): container finished" podID="7f4f8e75-d158-487b-872b-4cfa2cb0b98b" containerID="fc2eab83c77ef31659c0334e222c116491cfc3bdf7dd4dc18d3ac291e6369dd8" exitCode=0 Dec 09 11:41:32 crc kubenswrapper[4849]: I1209 11:41:32.883030 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerDied","Data":"fc2eab83c77ef31659c0334e222c116491cfc3bdf7dd4dc18d3ac291e6369dd8"} Dec 09 11:41:33 crc kubenswrapper[4849]: I1209 11:41:33.891843 4849 generic.go:334] "Generic (PLEG): container finished" podID="7f4f8e75-d158-487b-872b-4cfa2cb0b98b" containerID="d1969a05b35e5b0e6cd5fac3599d239b04342c19c63846e39cb713c4a20170a7" exitCode=0 Dec 09 11:41:33 crc kubenswrapper[4849]: I1209 11:41:33.891916 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerDied","Data":"d1969a05b35e5b0e6cd5fac3599d239b04342c19c63846e39cb713c4a20170a7"} Dec 09 11:41:34 crc kubenswrapper[4849]: I1209 11:41:34.903752 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerStarted","Data":"16f39f9315e2ef2a4dff7b47c4cc12c90fceffe376be5a02f90a7b2665a1d45f"} Dec 09 11:41:34 crc kubenswrapper[4849]: I1209 11:41:34.904080 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerStarted","Data":"01fdf673b8e6710761e2ea593a84262f07a17f8f91704fb1024cf80f3a68451b"} Dec 09 11:41:34 crc kubenswrapper[4849]: I1209 11:41:34.904095 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerStarted","Data":"9b2d95b749743ce6bdfd003fcdca56329470caaef0428638165f687a70e670a3"} Dec 09 11:41:34 crc kubenswrapper[4849]: I1209 11:41:34.904104 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerStarted","Data":"e0b92806fa5d15c2c6e30341b515caab529269f65c47a5fe9358aa655bcf03d7"} Dec 09 11:41:34 crc kubenswrapper[4849]: I1209 11:41:34.904115 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerStarted","Data":"a119097b2f40a6921bbdfceaf026fbc0b06f218ecf4cbe32e1846540ea587fde"} Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.032961 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lq5dk"] Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.034778 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.048838 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq5dk"] Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.229177 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-utilities\") pod \"redhat-marketplace-lq5dk\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.229217 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lntvg\" (UniqueName: \"kubernetes.io/projected/c090d47b-47e9-4707-b5e4-f165b88def1a-kube-api-access-lntvg\") pod \"redhat-marketplace-lq5dk\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.229311 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-catalog-content\") pod \"redhat-marketplace-lq5dk\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.330910 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-catalog-content\") pod \"redhat-marketplace-lq5dk\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.331013 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-utilities\") pod \"redhat-marketplace-lq5dk\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.331050 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lntvg\" (UniqueName: \"kubernetes.io/projected/c090d47b-47e9-4707-b5e4-f165b88def1a-kube-api-access-lntvg\") pod \"redhat-marketplace-lq5dk\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.332306 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-catalog-content\") pod \"redhat-marketplace-lq5dk\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.332477 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-utilities\") pod \"redhat-marketplace-lq5dk\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.356211 4849 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lntvg\" (UniqueName: \"kubernetes.io/projected/c090d47b-47e9-4707-b5e4-f165b88def1a-kube-api-access-lntvg\") pod \"redhat-marketplace-lq5dk\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.654592 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.922094 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k6bpg" event={"ID":"7f4f8e75-d158-487b-872b-4cfa2cb0b98b","Type":"ContainerStarted","Data":"6d128df77e0ef7d742c98c667ea87c3e699149d1c8551ab6b7525937189c3e3b"} Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.924384 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-k6bpg" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.955356 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-k6bpg" podStartSLOduration=7.829301034 podStartE2EDuration="17.955338403s" podCreationTimestamp="2025-12-09 11:41:18 +0000 UTC" firstStartedPulling="2025-12-09 11:41:21.357148436 +0000 UTC m=+863.897032752" lastFinishedPulling="2025-12-09 11:41:31.483185805 +0000 UTC m=+874.023070121" observedRunningTime="2025-12-09 11:41:35.949692559 +0000 UTC m=+878.489576895" watchObservedRunningTime="2025-12-09 11:41:35.955338403 +0000 UTC m=+878.495222709" Dec 09 11:41:35 crc kubenswrapper[4849]: I1209 11:41:35.966097 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq5dk"] Dec 09 11:41:35 crc kubenswrapper[4849]: W1209 11:41:35.970016 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc090d47b_47e9_4707_b5e4_f165b88def1a.slice/crio-61e0afe18bcfc4cfba6a636e2e359ed032655e16e6030dc29873bc2cedf0e052 WatchSource:0}: Error finding container 61e0afe18bcfc4cfba6a636e2e359ed032655e16e6030dc29873bc2cedf0e052: Status 404 returned error can't find the container with id 61e0afe18bcfc4cfba6a636e2e359ed032655e16e6030dc29873bc2cedf0e052 Dec 09 11:41:36 crc kubenswrapper[4849]: I1209 11:41:36.928669 4849 generic.go:334] "Generic (PLEG): container finished" podID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerID="e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb" exitCode=0 Dec 09 11:41:36 crc kubenswrapper[4849]: I1209 11:41:36.928756 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq5dk" event={"ID":"c090d47b-47e9-4707-b5e4-f165b88def1a","Type":"ContainerDied","Data":"e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb"} Dec 09 11:41:36 crc kubenswrapper[4849]: I1209 11:41:36.928796 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq5dk" event={"ID":"c090d47b-47e9-4707-b5e4-f165b88def1a","Type":"ContainerStarted","Data":"61e0afe18bcfc4cfba6a636e2e359ed032655e16e6030dc29873bc2cedf0e052"} Dec 09 11:41:37 crc kubenswrapper[4849]: I1209 11:41:37.937299 4849 generic.go:334] "Generic (PLEG): container finished" podID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerID="baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163" exitCode=0 Dec 09 11:41:37 crc kubenswrapper[4849]: I1209 11:41:37.937472 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-lq5dk" event={"ID":"c090d47b-47e9-4707-b5e4-f165b88def1a","Type":"ContainerDied","Data":"baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163"} Dec 09 11:41:38 crc kubenswrapper[4849]: I1209 11:41:38.946322 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq5dk" event={"ID":"c090d47b-47e9-4707-b5e4-f165b88def1a","Type":"ContainerStarted","Data":"6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82"} Dec 09 11:41:38 crc kubenswrapper[4849]: I1209 11:41:38.981000 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lq5dk" podStartSLOduration=2.5654591890000003 podStartE2EDuration="3.980986426s" podCreationTimestamp="2025-12-09 11:41:35 +0000 UTC" firstStartedPulling="2025-12-09 11:41:36.930561013 +0000 UTC m=+879.470445329" lastFinishedPulling="2025-12-09 11:41:38.34608824 +0000 UTC m=+880.885972566" observedRunningTime="2025-12-09 11:41:38.979636372 +0000 UTC m=+881.519520688" watchObservedRunningTime="2025-12-09 11:41:38.980986426 +0000 UTC m=+881.520870742" Dec 09 11:41:40 crc kubenswrapper[4849]: I1209 11:41:40.047797 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-k6bpg" Dec 09 11:41:40 crc kubenswrapper[4849]: I1209 11:41:40.056655 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-mdbqt" Dec 09 11:41:40 crc kubenswrapper[4849]: I1209 11:41:40.115966 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-k6bpg" Dec 09 11:41:42 crc kubenswrapper[4849]: I1209 11:41:42.825692 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lxwrr" Dec 09 11:41:45 crc kubenswrapper[4849]: I1209 11:41:45.655455 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:45 crc kubenswrapper[4849]: I1209 11:41:45.656015 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:45 crc kubenswrapper[4849]: I1209 11:41:45.699848 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:46 crc kubenswrapper[4849]: I1209 11:41:46.025962 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:46 crc kubenswrapper[4849]: I1209 11:41:46.067982 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq5dk"] Dec 09 11:41:47 crc kubenswrapper[4849]: I1209 11:41:47.997138 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lq5dk" podUID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerName="registry-server" containerID="cri-o://6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82" gracePeriod=2 Dec 09 11:41:48 crc kubenswrapper[4849]: I1209 11:41:48.903362 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.019751 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-utilities\") pod \"c090d47b-47e9-4707-b5e4-f165b88def1a\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.019818 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-catalog-content\") pod \"c090d47b-47e9-4707-b5e4-f165b88def1a\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.019856 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lntvg\" (UniqueName: \"kubernetes.io/projected/c090d47b-47e9-4707-b5e4-f165b88def1a-kube-api-access-lntvg\") pod \"c090d47b-47e9-4707-b5e4-f165b88def1a\" (UID: \"c090d47b-47e9-4707-b5e4-f165b88def1a\") " Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.027130 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-utilities" (OuterVolumeSpecName: "utilities") pod "c090d47b-47e9-4707-b5e4-f165b88def1a" (UID: "c090d47b-47e9-4707-b5e4-f165b88def1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.038108 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c090d47b-47e9-4707-b5e4-f165b88def1a-kube-api-access-lntvg" (OuterVolumeSpecName: "kube-api-access-lntvg") pod "c090d47b-47e9-4707-b5e4-f165b88def1a" (UID: "c090d47b-47e9-4707-b5e4-f165b88def1a"). InnerVolumeSpecName "kube-api-access-lntvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.049956 4849 generic.go:334] "Generic (PLEG): container finished" podID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerID="6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82" exitCode=0 Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.050119 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq5dk" event={"ID":"c090d47b-47e9-4707-b5e4-f165b88def1a","Type":"ContainerDied","Data":"6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82"} Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.050191 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq5dk" event={"ID":"c090d47b-47e9-4707-b5e4-f165b88def1a","Type":"ContainerDied","Data":"61e0afe18bcfc4cfba6a636e2e359ed032655e16e6030dc29873bc2cedf0e052"} Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.050254 4849 scope.go:117] "RemoveContainer" containerID="6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.050444 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq5dk" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.052365 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c090d47b-47e9-4707-b5e4-f165b88def1a" (UID: "c090d47b-47e9-4707-b5e4-f165b88def1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.069839 4849 scope.go:117] "RemoveContainer" containerID="baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.089038 4849 scope.go:117] "RemoveContainer" containerID="e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.111804 4849 scope.go:117] "RemoveContainer" containerID="6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82" Dec 09 11:41:49 crc kubenswrapper[4849]: E1209 11:41:49.112365 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82\": container with ID starting with 6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82 not found: ID does not exist" containerID="6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.112560 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82"} err="failed to get container status \"6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82\": rpc error: code = NotFound desc = could not find container \"6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82\": container with ID starting with 6bf398a30d456ac3f25d9e0fa596856f5abcf28ce5a075c4b6f26536c849ec82 not found: ID does not exist" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.112676 4849 scope.go:117] "RemoveContainer" containerID="baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163" Dec 09 11:41:49 crc kubenswrapper[4849]: E1209 11:41:49.113050 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163\": container with ID starting with baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163 not found: ID does not exist" containerID="baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.113087 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163"} err="failed to get container status \"baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163\": rpc error: code = NotFound desc = could not find container \"baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163\": container with ID starting with baf269cf013f214249de03462443053c38db15738327da4a4dbd5c6b87042163 not found: ID does not exist" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.113112 4849 scope.go:117] "RemoveContainer" containerID="e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb" Dec 09 11:41:49 crc 
kubenswrapper[4849]: E1209 11:41:49.113530 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb\": container with ID starting with e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb not found: ID does not exist" containerID="e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.113740 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb"} err="failed to get container status \"e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb\": rpc error: code = NotFound desc = could not find container \"e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb\": container with ID starting with e890ca76ae0ad9957779d39277988f51ffa4ef3f106cdfbac96946f08728babb not found: ID does not exist" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.121474 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.121517 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c090d47b-47e9-4707-b5e4-f165b88def1a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.121529 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lntvg\" (UniqueName: \"kubernetes.io/projected/c090d47b-47e9-4707-b5e4-f165b88def1a-kube-api-access-lntvg\") on node \"crc\" DevicePath \"\"" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.380987 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq5dk"] Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.384906 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq5dk"] Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.703916 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-k6bpg" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.725669 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bsgvl" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.755399 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-r8xbb"] Dec 09 11:41:49 crc kubenswrapper[4849]: E1209 11:41:49.755674 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerName="extract-utilities" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.755692 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerName="extract-utilities" Dec 09 11:41:49 crc kubenswrapper[4849]: E1209 11:41:49.755706 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerName="extract-content" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.755715 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerName="extract-content" Dec 09 11:41:49 crc 
kubenswrapper[4849]: E1209 11:41:49.755729 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerName="registry-server" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.755737 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerName="registry-server" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.755866 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="c090d47b-47e9-4707-b5e4-f165b88def1a" containerName="registry-server" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.756364 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r8xbb" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.762604 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-j67tk" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.762788 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.762803 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.766557 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r8xbb"] Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.831939 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78rvf\" (UniqueName: \"kubernetes.io/projected/bb370d9b-90b5-41df-91c4-444d3c94b9cb-kube-api-access-78rvf\") pod \"openstack-operator-index-r8xbb\" (UID: \"bb370d9b-90b5-41df-91c4-444d3c94b9cb\") " pod="openstack-operators/openstack-operator-index-r8xbb" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.933446 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78rvf\" (UniqueName: \"kubernetes.io/projected/bb370d9b-90b5-41df-91c4-444d3c94b9cb-kube-api-access-78rvf\") pod \"openstack-operator-index-r8xbb\" (UID: \"bb370d9b-90b5-41df-91c4-444d3c94b9cb\") " pod="openstack-operators/openstack-operator-index-r8xbb" Dec 09 11:41:49 crc kubenswrapper[4849]: I1209 11:41:49.954782 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78rvf\" (UniqueName: \"kubernetes.io/projected/bb370d9b-90b5-41df-91c4-444d3c94b9cb-kube-api-access-78rvf\") pod \"openstack-operator-index-r8xbb\" (UID: \"bb370d9b-90b5-41df-91c4-444d3c94b9cb\") " pod="openstack-operators/openstack-operator-index-r8xbb" Dec 09 11:41:50 crc kubenswrapper[4849]: I1209 11:41:50.075199 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r8xbb" Dec 09 11:41:50 crc kubenswrapper[4849]: I1209 11:41:50.265803 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r8xbb"] Dec 09 11:41:50 crc kubenswrapper[4849]: I1209 11:41:50.556922 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c090d47b-47e9-4707-b5e4-f165b88def1a" path="/var/lib/kubelet/pods/c090d47b-47e9-4707-b5e4-f165b88def1a/volumes" Dec 09 11:41:51 crc kubenswrapper[4849]: I1209 11:41:51.073841 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r8xbb" event={"ID":"bb370d9b-90b5-41df-91c4-444d3c94b9cb","Type":"ContainerStarted","Data":"ef1d60089134e533e75a1b31f8b00585292e82e3db7f0c8a67ab2147becfdbac"} Dec 09 11:41:51 crc kubenswrapper[4849]: I1209 11:41:51.133273 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:41:51 crc kubenswrapper[4849]: I1209 11:41:51.133338 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:41:54 crc kubenswrapper[4849]: I1209 11:41:54.531317 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r8xbb"] Dec 09 11:41:54 crc kubenswrapper[4849]: I1209 11:41:54.942209 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-r4kw2"] Dec 09 11:41:54 crc kubenswrapper[4849]: I1209 11:41:54.943729 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r4kw2" Dec 09 11:41:54 crc kubenswrapper[4849]: I1209 11:41:54.959682 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4kw2"] Dec 09 11:41:55 crc kubenswrapper[4849]: I1209 11:41:55.020684 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlrjf\" (UniqueName: \"kubernetes.io/projected/5f5f2ad5-e7ac-4940-8c46-bd32cb571127-kube-api-access-dlrjf\") pod \"openstack-operator-index-r4kw2\" (UID: \"5f5f2ad5-e7ac-4940-8c46-bd32cb571127\") " pod="openstack-operators/openstack-operator-index-r4kw2" Dec 09 11:41:55 crc kubenswrapper[4849]: I1209 11:41:55.121942 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlrjf\" (UniqueName: \"kubernetes.io/projected/5f5f2ad5-e7ac-4940-8c46-bd32cb571127-kube-api-access-dlrjf\") pod \"openstack-operator-index-r4kw2\" (UID: \"5f5f2ad5-e7ac-4940-8c46-bd32cb571127\") " pod="openstack-operators/openstack-operator-index-r4kw2" Dec 09 11:41:55 crc kubenswrapper[4849]: I1209 11:41:55.142684 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlrjf\" (UniqueName: \"kubernetes.io/projected/5f5f2ad5-e7ac-4940-8c46-bd32cb571127-kube-api-access-dlrjf\") pod \"openstack-operator-index-r4kw2\" (UID: \"5f5f2ad5-e7ac-4940-8c46-bd32cb571127\") " pod="openstack-operators/openstack-operator-index-r4kw2" Dec 09 11:41:55 crc kubenswrapper[4849]: I1209 11:41:55.269744 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4kw2" Dec 09 11:41:56 crc kubenswrapper[4849]: I1209 11:41:56.310052 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4kw2"] Dec 09 11:41:57 crc kubenswrapper[4849]: I1209 11:41:57.116087 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4kw2" event={"ID":"5f5f2ad5-e7ac-4940-8c46-bd32cb571127","Type":"ContainerStarted","Data":"610a27cccc0e16644e1704adef02198554fba93688d876382137edb768e66212"} Dec 09 11:41:58 crc kubenswrapper[4849]: I1209 11:41:58.123391 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4kw2" event={"ID":"5f5f2ad5-e7ac-4940-8c46-bd32cb571127","Type":"ContainerStarted","Data":"828b41de1c5f74a7714472df662026d379710b79d11b7afe8e2e7ee1ae4e58ab"} Dec 09 11:41:58 crc kubenswrapper[4849]: I1209 11:41:58.124856 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r8xbb" event={"ID":"bb370d9b-90b5-41df-91c4-444d3c94b9cb","Type":"ContainerStarted","Data":"6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a"} Dec 09 11:41:58 crc kubenswrapper[4849]: I1209 11:41:58.124992 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-r8xbb" podUID="bb370d9b-90b5-41df-91c4-444d3c94b9cb" containerName="registry-server" containerID="cri-o://6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a" gracePeriod=2 Dec 09 11:41:58 crc kubenswrapper[4849]: I1209 11:41:58.138589 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-r4kw2" podStartSLOduration=3.926502646 podStartE2EDuration="4.138572229s" podCreationTimestamp="2025-12-09 11:41:54 +0000 
UTC" firstStartedPulling="2025-12-09 11:41:57.049943895 +0000 UTC m=+899.589828211" lastFinishedPulling="2025-12-09 11:41:57.262013478 +0000 UTC m=+899.801897794" observedRunningTime="2025-12-09 11:41:58.136259281 +0000 UTC m=+900.676143607" watchObservedRunningTime="2025-12-09 11:41:58.138572229 +0000 UTC m=+900.678456555" Dec 09 11:41:58 crc kubenswrapper[4849]: I1209 11:41:58.158852 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-r8xbb" podStartSLOduration=2.151580997 podStartE2EDuration="9.158828225s" podCreationTimestamp="2025-12-09 11:41:49 +0000 UTC" firstStartedPulling="2025-12-09 11:41:50.279313745 +0000 UTC m=+892.819198061" lastFinishedPulling="2025-12-09 11:41:57.286560983 +0000 UTC m=+899.826445289" observedRunningTime="2025-12-09 11:41:58.152578555 +0000 UTC m=+900.692462871" watchObservedRunningTime="2025-12-09 11:41:58.158828225 +0000 UTC m=+900.698712541" Dec 09 11:41:58 crc kubenswrapper[4849]: I1209 11:41:58.800259 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r8xbb" Dec 09 11:41:58 crc kubenswrapper[4849]: I1209 11:41:58.905185 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78rvf\" (UniqueName: \"kubernetes.io/projected/bb370d9b-90b5-41df-91c4-444d3c94b9cb-kube-api-access-78rvf\") pod \"bb370d9b-90b5-41df-91c4-444d3c94b9cb\" (UID: \"bb370d9b-90b5-41df-91c4-444d3c94b9cb\") " Dec 09 11:41:58 crc kubenswrapper[4849]: I1209 11:41:58.911313 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb370d9b-90b5-41df-91c4-444d3c94b9cb-kube-api-access-78rvf" (OuterVolumeSpecName: "kube-api-access-78rvf") pod "bb370d9b-90b5-41df-91c4-444d3c94b9cb" (UID: "bb370d9b-90b5-41df-91c4-444d3c94b9cb"). InnerVolumeSpecName "kube-api-access-78rvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.007372 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78rvf\" (UniqueName: \"kubernetes.io/projected/bb370d9b-90b5-41df-91c4-444d3c94b9cb-kube-api-access-78rvf\") on node \"crc\" DevicePath \"\"" Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.132739 4849 generic.go:334] "Generic (PLEG): container finished" podID="bb370d9b-90b5-41df-91c4-444d3c94b9cb" containerID="6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a" exitCode=0 Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.132794 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r8xbb" Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.132796 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r8xbb" event={"ID":"bb370d9b-90b5-41df-91c4-444d3c94b9cb","Type":"ContainerDied","Data":"6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a"} Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.133136 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r8xbb" event={"ID":"bb370d9b-90b5-41df-91c4-444d3c94b9cb","Type":"ContainerDied","Data":"ef1d60089134e533e75a1b31f8b00585292e82e3db7f0c8a67ab2147becfdbac"} Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.133165 4849 scope.go:117] "RemoveContainer" containerID="6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a" Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.164492 4849 scope.go:117] "RemoveContainer" containerID="6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a" Dec 09 11:41:59 crc kubenswrapper[4849]: E1209 11:41:59.165071 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a\": container with ID starting with 6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a not found: ID does not exist" containerID="6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a" Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.165113 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a"} err="failed to get container status \"6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a\": rpc error: code = NotFound desc = could not find container \"6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a\": container with ID starting with 6877754a8894a364944dad06a981fd2e1f5a51a54762cb7ab4ad26ea607d892a not found: ID does not exist" Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.165970 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r8xbb"] Dec 09 11:41:59 crc kubenswrapper[4849]: I1209 11:41:59.170498 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-r8xbb"] Dec 09 11:42:00 crc kubenswrapper[4849]: I1209 11:42:00.544473 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb370d9b-90b5-41df-91c4-444d3c94b9cb" path="/var/lib/kubelet/pods/bb370d9b-90b5-41df-91c4-444d3c94b9cb/volumes" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.741819 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4hvxq"] Dec 09 11:42:01 crc kubenswrapper[4849]: E1209 11:42:01.742859 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb370d9b-90b5-41df-91c4-444d3c94b9cb" containerName="registry-server" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.742880 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb370d9b-90b5-41df-91c4-444d3c94b9cb" containerName="registry-server" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.743047 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb370d9b-90b5-41df-91c4-444d3c94b9cb" containerName="registry-server" Dec 09 11:42:01 crc 
kubenswrapper[4849]: I1209 11:42:01.744844 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.759052 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4hvxq"] Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.843404 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-catalog-content\") pod \"certified-operators-4hvxq\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.843589 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-utilities\") pod \"certified-operators-4hvxq\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.843782 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hg6\" (UniqueName: \"kubernetes.io/projected/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-kube-api-access-99hg6\") pod \"certified-operators-4hvxq\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.944528 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99hg6\" (UniqueName: \"kubernetes.io/projected/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-kube-api-access-99hg6\") pod \"certified-operators-4hvxq\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.944637 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-catalog-content\") pod \"certified-operators-4hvxq\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.944667 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-utilities\") pod \"certified-operators-4hvxq\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.945128 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-catalog-content\") pod \"certified-operators-4hvxq\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.945188 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-utilities\") pod \"certified-operators-4hvxq\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " pod="openshift-marketplace/certified-operators-4hvxq" Dec 
09 11:42:01 crc kubenswrapper[4849]: I1209 11:42:01.966964 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99hg6\" (UniqueName: \"kubernetes.io/projected/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-kube-api-access-99hg6\") pod \"certified-operators-4hvxq\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:02 crc kubenswrapper[4849]: I1209 11:42:02.067355 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:02 crc kubenswrapper[4849]: I1209 11:42:02.522164 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4hvxq"] Dec 09 11:42:03 crc kubenswrapper[4849]: I1209 11:42:03.162239 4849 generic.go:334] "Generic (PLEG): container finished" podID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerID="49e6327bf2afec545012f2d7dd7f9107435ff1322a35301dbf7322cb7e5dd390" exitCode=0 Dec 09 11:42:03 crc kubenswrapper[4849]: I1209 11:42:03.162281 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hvxq" event={"ID":"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a","Type":"ContainerDied","Data":"49e6327bf2afec545012f2d7dd7f9107435ff1322a35301dbf7322cb7e5dd390"} Dec 09 11:42:03 crc kubenswrapper[4849]: I1209 11:42:03.162765 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hvxq" event={"ID":"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a","Type":"ContainerStarted","Data":"a73ed77562b6b2745aac8fe4829a7f68961cb25580c3a95846167e2f82b53004"} Dec 09 11:42:04 crc kubenswrapper[4849]: I1209 11:42:04.169384 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hvxq" event={"ID":"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a","Type":"ContainerStarted","Data":"3490220c2c0170f4449f092b0d486d8137d8e4f1921062fd420e51abb3524e33"} Dec 09 11:42:05 crc kubenswrapper[4849]: I1209 11:42:05.270459 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-r4kw2" Dec 09 11:42:05 crc kubenswrapper[4849]: I1209 11:42:05.270514 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-r4kw2" Dec 09 11:42:05 crc kubenswrapper[4849]: I1209 11:42:05.299302 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-r4kw2" Dec 09 11:42:06 crc kubenswrapper[4849]: I1209 11:42:06.215599 4849 generic.go:334] "Generic (PLEG): container finished" podID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerID="3490220c2c0170f4449f092b0d486d8137d8e4f1921062fd420e51abb3524e33" exitCode=0 Dec 09 11:42:06 crc kubenswrapper[4849]: I1209 11:42:06.215681 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hvxq" event={"ID":"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a","Type":"ContainerDied","Data":"3490220c2c0170f4449f092b0d486d8137d8e4f1921062fd420e51abb3524e33"} Dec 09 11:42:06 crc kubenswrapper[4849]: I1209 11:42:06.245757 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-r4kw2" Dec 09 11:42:07 crc kubenswrapper[4849]: I1209 11:42:07.224457 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hvxq" 
event={"ID":"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a","Type":"ContainerStarted","Data":"785d9aea2db44fe25c04d168d0d668a685ee311f3c63d583dace1d03953a6604"} Dec 09 11:42:07 crc kubenswrapper[4849]: I1209 11:42:07.242780 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4hvxq" podStartSLOduration=2.78298226 podStartE2EDuration="6.242763164s" podCreationTimestamp="2025-12-09 11:42:01 +0000 UTC" firstStartedPulling="2025-12-09 11:42:03.164618795 +0000 UTC m=+905.704503161" lastFinishedPulling="2025-12-09 11:42:06.624399749 +0000 UTC m=+909.164284065" observedRunningTime="2025-12-09 11:42:07.241976074 +0000 UTC m=+909.781860400" watchObservedRunningTime="2025-12-09 11:42:07.242763164 +0000 UTC m=+909.782647480" Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.068587 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.069112 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.117531 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.286424 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.786064 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc"] Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.787402 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.792440 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-b69mj" Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.802243 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc"] Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.949789 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-util\") pod \"a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.949842 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-bundle\") pod \"a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:12 crc kubenswrapper[4849]: I1209 11:42:12.949875 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8w2\" (UniqueName: \"kubernetes.io/projected/a596136d-71ff-41b2-afcc-5886048a6af9-kube-api-access-tz8w2\") pod \"a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:13 crc kubenswrapper[4849]: I1209 11:42:13.051728 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-util\") pod \"a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:13 crc kubenswrapper[4849]: I1209 11:42:13.051777 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-bundle\") pod \"a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:13 crc kubenswrapper[4849]: I1209 11:42:13.051811 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz8w2\" (UniqueName: \"kubernetes.io/projected/a596136d-71ff-41b2-afcc-5886048a6af9-kube-api-access-tz8w2\") pod \"a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:13 crc kubenswrapper[4849]: I1209 11:42:13.052235 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-util\") pod \"a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:13 crc kubenswrapper[4849]: I1209 11:42:13.052554 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-bundle\") pod \"a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:13 crc kubenswrapper[4849]: I1209 11:42:13.074263 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz8w2\" (UniqueName: \"kubernetes.io/projected/a596136d-71ff-41b2-afcc-5886048a6af9-kube-api-access-tz8w2\") pod \"a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:13 crc kubenswrapper[4849]: I1209 11:42:13.111955 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:13 crc kubenswrapper[4849]: I1209 11:42:13.572191 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc"] Dec 09 11:42:13 crc kubenswrapper[4849]: I1209 11:42:13.938974 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4hvxq"] Dec 09 11:42:14 crc kubenswrapper[4849]: I1209 11:42:14.299989 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" event={"ID":"a596136d-71ff-41b2-afcc-5886048a6af9","Type":"ContainerStarted","Data":"98e15d2f1097d4208c57769c74c1268db3738465b7921b34905bd9eb827103c9"} Dec 09 11:42:14 crc kubenswrapper[4849]: I1209 11:42:14.301706 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" event={"ID":"a596136d-71ff-41b2-afcc-5886048a6af9","Type":"ContainerStarted","Data":"ea58e107235c877a477fe9d6b73bb8cb3b0794a50114e2e10c028afeb5cc762a"} Dec 09 11:42:14 crc kubenswrapper[4849]: I1209 11:42:14.300077 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4hvxq" podUID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerName="registry-server" containerID="cri-o://785d9aea2db44fe25c04d168d0d668a685ee311f3c63d583dace1d03953a6604" gracePeriod=2 Dec 09 11:42:15 crc kubenswrapper[4849]: I1209 11:42:15.310015 4849 generic.go:334] "Generic (PLEG): container finished" podID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerID="785d9aea2db44fe25c04d168d0d668a685ee311f3c63d583dace1d03953a6604" exitCode=0 Dec 09 11:42:15 crc kubenswrapper[4849]: I1209 11:42:15.310105 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hvxq" event={"ID":"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a","Type":"ContainerDied","Data":"785d9aea2db44fe25c04d168d0d668a685ee311f3c63d583dace1d03953a6604"} Dec 09 11:42:15 crc kubenswrapper[4849]: I1209 11:42:15.313704 4849 
generic.go:334] "Generic (PLEG): container finished" podID="a596136d-71ff-41b2-afcc-5886048a6af9" containerID="98e15d2f1097d4208c57769c74c1268db3738465b7921b34905bd9eb827103c9" exitCode=0 Dec 09 11:42:15 crc kubenswrapper[4849]: I1209 11:42:15.313829 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" event={"ID":"a596136d-71ff-41b2-afcc-5886048a6af9","Type":"ContainerDied","Data":"98e15d2f1097d4208c57769c74c1268db3738465b7921b34905bd9eb827103c9"} Dec 09 11:42:15 crc kubenswrapper[4849]: I1209 11:42:15.983770 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.123188 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-catalog-content\") pod \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.123267 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99hg6\" (UniqueName: \"kubernetes.io/projected/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-kube-api-access-99hg6\") pod \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.123422 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-utilities\") pod \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\" (UID: \"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a\") " Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.124717 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-utilities" (OuterVolumeSpecName: "utilities") pod "44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" (UID: "44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.133566 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-kube-api-access-99hg6" (OuterVolumeSpecName: "kube-api-access-99hg6") pod "44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" (UID: "44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a"). InnerVolumeSpecName "kube-api-access-99hg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.172689 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" (UID: "44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.225106 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.225171 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.225187 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99hg6\" (UniqueName: \"kubernetes.io/projected/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a-kube-api-access-99hg6\") on node \"crc\" DevicePath \"\"" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.320717 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hvxq" event={"ID":"44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a","Type":"ContainerDied","Data":"a73ed77562b6b2745aac8fe4829a7f68961cb25580c3a95846167e2f82b53004"} Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.320774 4849 scope.go:117] "RemoveContainer" containerID="785d9aea2db44fe25c04d168d0d668a685ee311f3c63d583dace1d03953a6604" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.320925 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4hvxq" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.325876 4849 generic.go:334] "Generic (PLEG): container finished" podID="a596136d-71ff-41b2-afcc-5886048a6af9" containerID="4b5a32aee911d3806bc04b7561a779e80d1db2ed118f51796acc5d04eece97af" exitCode=0 Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.325921 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" event={"ID":"a596136d-71ff-41b2-afcc-5886048a6af9","Type":"ContainerDied","Data":"4b5a32aee911d3806bc04b7561a779e80d1db2ed118f51796acc5d04eece97af"} Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.346798 4849 scope.go:117] "RemoveContainer" containerID="3490220c2c0170f4449f092b0d486d8137d8e4f1921062fd420e51abb3524e33" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.374568 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4hvxq"] Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.379932 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4hvxq"] Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.392120 4849 scope.go:117] "RemoveContainer" containerID="49e6327bf2afec545012f2d7dd7f9107435ff1322a35301dbf7322cb7e5dd390" Dec 09 11:42:16 crc kubenswrapper[4849]: I1209 11:42:16.547142 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" path="/var/lib/kubelet/pods/44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a/volumes" Dec 09 11:42:17 crc kubenswrapper[4849]: I1209 11:42:17.335162 4849 generic.go:334] "Generic (PLEG): container finished" podID="a596136d-71ff-41b2-afcc-5886048a6af9" containerID="ff34434efcc46269e11ba6dfbf4ccc2998606e7bac6da53f6f7e03ed4761ea35" exitCode=0 Dec 09 11:42:17 crc kubenswrapper[4849]: I1209 11:42:17.335208 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" event={"ID":"a596136d-71ff-41b2-afcc-5886048a6af9","Type":"ContainerDied","Data":"ff34434efcc46269e11ba6dfbf4ccc2998606e7bac6da53f6f7e03ed4761ea35"} Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.565087 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.760267 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-bundle\") pod \"a596136d-71ff-41b2-afcc-5886048a6af9\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.760346 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-util\") pod \"a596136d-71ff-41b2-afcc-5886048a6af9\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.760463 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz8w2\" (UniqueName: \"kubernetes.io/projected/a596136d-71ff-41b2-afcc-5886048a6af9-kube-api-access-tz8w2\") pod \"a596136d-71ff-41b2-afcc-5886048a6af9\" (UID: \"a596136d-71ff-41b2-afcc-5886048a6af9\") " Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.761049 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-bundle" (OuterVolumeSpecName: "bundle") pod "a596136d-71ff-41b2-afcc-5886048a6af9" (UID: "a596136d-71ff-41b2-afcc-5886048a6af9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.768595 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a596136d-71ff-41b2-afcc-5886048a6af9-kube-api-access-tz8w2" (OuterVolumeSpecName: "kube-api-access-tz8w2") pod "a596136d-71ff-41b2-afcc-5886048a6af9" (UID: "a596136d-71ff-41b2-afcc-5886048a6af9"). InnerVolumeSpecName "kube-api-access-tz8w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.774252 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-util" (OuterVolumeSpecName: "util") pod "a596136d-71ff-41b2-afcc-5886048a6af9" (UID: "a596136d-71ff-41b2-afcc-5886048a6af9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.862493 4849 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.862535 4849 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a596136d-71ff-41b2-afcc-5886048a6af9-util\") on node \"crc\" DevicePath \"\"" Dec 09 11:42:18 crc kubenswrapper[4849]: I1209 11:42:18.862547 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz8w2\" (UniqueName: \"kubernetes.io/projected/a596136d-71ff-41b2-afcc-5886048a6af9-kube-api-access-tz8w2\") on node \"crc\" DevicePath \"\"" Dec 09 11:42:19 crc kubenswrapper[4849]: I1209 11:42:19.350738 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" event={"ID":"a596136d-71ff-41b2-afcc-5886048a6af9","Type":"ContainerDied","Data":"ea58e107235c877a477fe9d6b73bb8cb3b0794a50114e2e10c028afeb5cc762a"} Dec 09 11:42:19 crc kubenswrapper[4849]: I1209 11:42:19.351074 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea58e107235c877a477fe9d6b73bb8cb3b0794a50114e2e10c028afeb5cc762a" Dec 09 11:42:19 crc kubenswrapper[4849]: I1209 11:42:19.350925 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.363723 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6t97"] Dec 09 11:42:20 crc kubenswrapper[4849]: E1209 11:42:20.364161 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a596136d-71ff-41b2-afcc-5886048a6af9" containerName="pull" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.364175 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a596136d-71ff-41b2-afcc-5886048a6af9" containerName="pull" Dec 09 11:42:20 crc kubenswrapper[4849]: E1209 11:42:20.364189 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a596136d-71ff-41b2-afcc-5886048a6af9" containerName="util" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.364195 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a596136d-71ff-41b2-afcc-5886048a6af9" containerName="util" Dec 09 11:42:20 crc kubenswrapper[4849]: E1209 11:42:20.364206 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerName="registry-server" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.364213 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerName="registry-server" Dec 09 11:42:20 crc kubenswrapper[4849]: E1209 11:42:20.364232 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerName="extract-content" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.364239 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerName="extract-content" Dec 09 11:42:20 crc kubenswrapper[4849]: E1209 11:42:20.364248 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a596136d-71ff-41b2-afcc-5886048a6af9" 
containerName="extract" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.364254 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a596136d-71ff-41b2-afcc-5886048a6af9" containerName="extract" Dec 09 11:42:20 crc kubenswrapper[4849]: E1209 11:42:20.364267 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerName="extract-utilities" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.364274 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerName="extract-utilities" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.364401 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="44dcf1f4-0dbc-4a20-97f7-5eacfc6ed53a" containerName="registry-server" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.364434 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="a596136d-71ff-41b2-afcc-5886048a6af9" containerName="extract" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.365433 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.370730 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6t97"] Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.381090 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-utilities\") pod \"community-operators-p6t97\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.381175 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9kw\" (UniqueName: \"kubernetes.io/projected/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-kube-api-access-8m9kw\") pod \"community-operators-p6t97\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.381205 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-catalog-content\") pod \"community-operators-p6t97\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.482904 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-utilities\") pod \"community-operators-p6t97\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.482957 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m9kw\" (UniqueName: \"kubernetes.io/projected/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-kube-api-access-8m9kw\") pod \"community-operators-p6t97\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.482982 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-catalog-content\") pod \"community-operators-p6t97\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.483548 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-catalog-content\") pod \"community-operators-p6t97\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.483619 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-utilities\") pod \"community-operators-p6t97\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.502335 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m9kw\" (UniqueName: \"kubernetes.io/projected/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-kube-api-access-8m9kw\") pod \"community-operators-p6t97\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:20 crc kubenswrapper[4849]: I1209 11:42:20.693526 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:21 crc kubenswrapper[4849]: I1209 11:42:21.045063 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6t97"] Dec 09 11:42:21 crc kubenswrapper[4849]: I1209 11:42:21.133320 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:42:21 crc kubenswrapper[4849]: I1209 11:42:21.133704 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:42:21 crc kubenswrapper[4849]: I1209 11:42:21.380248 4849 generic.go:334] "Generic (PLEG): container finished" podID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerID="73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba" exitCode=0 Dec 09 11:42:21 crc kubenswrapper[4849]: I1209 11:42:21.380289 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6t97" event={"ID":"0df0db64-7578-46b1-94ae-4e93f9b6e3cf","Type":"ContainerDied","Data":"73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba"} Dec 09 11:42:21 crc kubenswrapper[4849]: I1209 11:42:21.380316 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6t97" event={"ID":"0df0db64-7578-46b1-94ae-4e93f9b6e3cf","Type":"ContainerStarted","Data":"eb956c79bf5f55014248dd8f3264942f96fcea5613aa46daf3bc2ce740e1fa2d"} Dec 09 11:42:22 crc kubenswrapper[4849]: I1209 11:42:22.389713 4849 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-p6t97" event={"ID":"0df0db64-7578-46b1-94ae-4e93f9b6e3cf","Type":"ContainerStarted","Data":"e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6"} Dec 09 11:42:23 crc kubenswrapper[4849]: I1209 11:42:23.399867 4849 generic.go:334] "Generic (PLEG): container finished" podID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerID="e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6" exitCode=0 Dec 09 11:42:23 crc kubenswrapper[4849]: I1209 11:42:23.399933 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6t97" event={"ID":"0df0db64-7578-46b1-94ae-4e93f9b6e3cf","Type":"ContainerDied","Data":"e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6"} Dec 09 11:42:23 crc kubenswrapper[4849]: I1209 11:42:23.960404 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-bc7998764-95772"] Dec 09 11:42:23 crc kubenswrapper[4849]: I1209 11:42:23.961650 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" Dec 09 11:42:23 crc kubenswrapper[4849]: I1209 11:42:23.968333 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-26n86" Dec 09 11:42:23 crc kubenswrapper[4849]: I1209 11:42:23.990622 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-bc7998764-95772"] Dec 09 11:42:24 crc kubenswrapper[4849]: I1209 11:42:24.141567 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nkqz\" (UniqueName: \"kubernetes.io/projected/6eb46d53-c911-45b6-b66d-982cb5e46f18-kube-api-access-5nkqz\") pod \"openstack-operator-controller-operator-bc7998764-95772\" (UID: \"6eb46d53-c911-45b6-b66d-982cb5e46f18\") " pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" Dec 09 11:42:24 crc kubenswrapper[4849]: I1209 11:42:24.242817 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nkqz\" (UniqueName: \"kubernetes.io/projected/6eb46d53-c911-45b6-b66d-982cb5e46f18-kube-api-access-5nkqz\") pod \"openstack-operator-controller-operator-bc7998764-95772\" (UID: \"6eb46d53-c911-45b6-b66d-982cb5e46f18\") " pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" Dec 09 11:42:24 crc kubenswrapper[4849]: I1209 11:42:24.265684 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nkqz\" (UniqueName: \"kubernetes.io/projected/6eb46d53-c911-45b6-b66d-982cb5e46f18-kube-api-access-5nkqz\") pod \"openstack-operator-controller-operator-bc7998764-95772\" (UID: \"6eb46d53-c911-45b6-b66d-982cb5e46f18\") " pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" Dec 09 11:42:24 crc kubenswrapper[4849]: I1209 11:42:24.282931 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" Dec 09 11:42:24 crc kubenswrapper[4849]: I1209 11:42:24.752493 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-bc7998764-95772"] Dec 09 11:42:24 crc kubenswrapper[4849]: W1209 11:42:24.757398 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb46d53_c911_45b6_b66d_982cb5e46f18.slice/crio-0c482803c4039f039613c173611823c2c040501aba2581abb7329a542431d404 WatchSource:0}: Error finding container 0c482803c4039f039613c173611823c2c040501aba2581abb7329a542431d404: Status 404 returned error can't find the container with id 0c482803c4039f039613c173611823c2c040501aba2581abb7329a542431d404 Dec 09 11:42:25 crc kubenswrapper[4849]: I1209 11:42:25.469981 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" event={"ID":"6eb46d53-c911-45b6-b66d-982cb5e46f18","Type":"ContainerStarted","Data":"0c482803c4039f039613c173611823c2c040501aba2581abb7329a542431d404"} Dec 09 11:42:25 crc kubenswrapper[4849]: I1209 11:42:25.472775 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6t97" event={"ID":"0df0db64-7578-46b1-94ae-4e93f9b6e3cf","Type":"ContainerStarted","Data":"73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256"} Dec 09 11:42:28 crc kubenswrapper[4849]: I1209 11:42:28.595977 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6t97" podStartSLOduration=5.195359974 podStartE2EDuration="8.595952692s" podCreationTimestamp="2025-12-09 11:42:20 +0000 UTC" firstStartedPulling="2025-12-09 11:42:21.381666142 +0000 UTC m=+923.921550458" lastFinishedPulling="2025-12-09 11:42:24.78225886 +0000 UTC m=+927.322143176" observedRunningTime="2025-12-09 11:42:25.497335824 +0000 UTC m=+928.037220140" watchObservedRunningTime="2025-12-09 11:42:28.595952692 +0000 UTC m=+931.135837028" Dec 09 11:42:30 crc kubenswrapper[4849]: I1209 11:42:30.694379 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:30 crc kubenswrapper[4849]: I1209 11:42:30.694766 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:30 crc kubenswrapper[4849]: I1209 11:42:30.767935 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:31 crc kubenswrapper[4849]: I1209 11:42:31.575033 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:33 crc kubenswrapper[4849]: I1209 11:42:33.732941 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6t97"] Dec 09 11:42:33 crc kubenswrapper[4849]: I1209 11:42:33.733215 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p6t97" podUID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerName="registry-server" containerID="cri-o://73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256" gracePeriod=2 Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.146354 4849 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.267124 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-utilities\") pod \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.267181 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m9kw\" (UniqueName: \"kubernetes.io/projected/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-kube-api-access-8m9kw\") pod \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.267275 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-catalog-content\") pod \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\" (UID: \"0df0db64-7578-46b1-94ae-4e93f9b6e3cf\") " Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.268067 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-utilities" (OuterVolumeSpecName: "utilities") pod "0df0db64-7578-46b1-94ae-4e93f9b6e3cf" (UID: "0df0db64-7578-46b1-94ae-4e93f9b6e3cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.287588 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-kube-api-access-8m9kw" (OuterVolumeSpecName: "kube-api-access-8m9kw") pod "0df0db64-7578-46b1-94ae-4e93f9b6e3cf" (UID: "0df0db64-7578-46b1-94ae-4e93f9b6e3cf"). InnerVolumeSpecName "kube-api-access-8m9kw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.333001 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0df0db64-7578-46b1-94ae-4e93f9b6e3cf" (UID: "0df0db64-7578-46b1-94ae-4e93f9b6e3cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.369202 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.369239 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m9kw\" (UniqueName: \"kubernetes.io/projected/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-kube-api-access-8m9kw\") on node \"crc\" DevicePath \"\"" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.369256 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df0db64-7578-46b1-94ae-4e93f9b6e3cf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.538750 4849 generic.go:334] "Generic (PLEG): container finished" podID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerID="73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256" exitCode=0 Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.538824 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6t97" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.548887 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.549096 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" event={"ID":"6eb46d53-c911-45b6-b66d-982cb5e46f18","Type":"ContainerStarted","Data":"6ea1ec438ade485c51fd6c64bc29fe0783982072003cac0b22bcf39556b2b264"} Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.549200 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6t97" event={"ID":"0df0db64-7578-46b1-94ae-4e93f9b6e3cf","Type":"ContainerDied","Data":"73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256"} Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.549327 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6t97" event={"ID":"0df0db64-7578-46b1-94ae-4e93f9b6e3cf","Type":"ContainerDied","Data":"eb956c79bf5f55014248dd8f3264942f96fcea5613aa46daf3bc2ce740e1fa2d"} Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.549459 4849 scope.go:117] "RemoveContainer" containerID="73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.574694 4849 scope.go:117] "RemoveContainer" containerID="e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.591183 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" podStartSLOduration=2.654538582 podStartE2EDuration="11.591158962s" podCreationTimestamp="2025-12-09 11:42:23 +0000 UTC" firstStartedPulling="2025-12-09 11:42:24.760164498 +0000 UTC m=+927.300048824" lastFinishedPulling="2025-12-09 11:42:33.696784888 +0000 UTC m=+936.236669204" observedRunningTime="2025-12-09 11:42:34.586030852 +0000 UTC m=+937.125915158" watchObservedRunningTime="2025-12-09 11:42:34.591158962 +0000 UTC m=+937.131043288" Dec 09 
11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.603977 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6t97"] Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.610148 4849 scope.go:117] "RemoveContainer" containerID="73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.610989 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6t97"] Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.625810 4849 scope.go:117] "RemoveContainer" containerID="73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256" Dec 09 11:42:34 crc kubenswrapper[4849]: E1209 11:42:34.626129 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256\": container with ID starting with 73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256 not found: ID does not exist" containerID="73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.626159 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256"} err="failed to get container status \"73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256\": rpc error: code = NotFound desc = could not find container \"73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256\": container with ID starting with 73ec2904f2fa2ec2f34ba448a34e1a26810cbc9d792fbd8e5aefd2fd7d35c256 not found: ID does not exist" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.626181 4849 scope.go:117] "RemoveContainer" containerID="e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6" Dec 09 11:42:34 crc kubenswrapper[4849]: E1209 11:42:34.626371 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6\": container with ID starting with e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6 not found: ID does not exist" containerID="e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.626450 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6"} err="failed to get container status \"e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6\": rpc error: code = NotFound desc = could not find container \"e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6\": container with ID starting with e6f3167d8cc4366a5fb8a568405681ae3bba112244d5485131fedce19bdf2bf6 not found: ID does not exist" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.626468 4849 scope.go:117] "RemoveContainer" containerID="73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba" Dec 09 11:42:34 crc kubenswrapper[4849]: E1209 11:42:34.626674 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba\": container with ID starting with 73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba not found: ID does not 
exist" containerID="73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba" Dec 09 11:42:34 crc kubenswrapper[4849]: I1209 11:42:34.626693 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba"} err="failed to get container status \"73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba\": rpc error: code = NotFound desc = could not find container \"73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba\": container with ID starting with 73c5c5951beda15134ab6e3cecf8d200072b5ad2f8a3e6baba3216d4786511ba not found: ID does not exist" Dec 09 11:42:36 crc kubenswrapper[4849]: I1209 11:42:36.544907 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" path="/var/lib/kubelet/pods/0df0db64-7578-46b1-94ae-4e93f9b6e3cf/volumes" Dec 09 11:42:44 crc kubenswrapper[4849]: I1209 11:42:44.285730 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-bc7998764-95772" Dec 09 11:42:51 crc kubenswrapper[4849]: I1209 11:42:51.132979 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:42:51 crc kubenswrapper[4849]: I1209 11:42:51.133705 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:42:51 crc kubenswrapper[4849]: I1209 11:42:51.133776 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:42:51 crc kubenswrapper[4849]: I1209 11:42:51.134851 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb7e27f11d509caaa9ebc587327526354751d200d66bddbb3fd44be26e61d13f"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 11:42:51 crc kubenswrapper[4849]: I1209 11:42:51.134985 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://fb7e27f11d509caaa9ebc587327526354751d200d66bddbb3fd44be26e61d13f" gracePeriod=600 Dec 09 11:42:51 crc kubenswrapper[4849]: I1209 11:42:51.668142 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="fb7e27f11d509caaa9ebc587327526354751d200d66bddbb3fd44be26e61d13f" exitCode=0 Dec 09 11:42:51 crc kubenswrapper[4849]: I1209 11:42:51.668462 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"fb7e27f11d509caaa9ebc587327526354751d200d66bddbb3fd44be26e61d13f"} Dec 09 11:42:51 crc 
kubenswrapper[4849]: I1209 11:42:51.668488 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"0a2af74fde05e47664890560ba0230403bcc6a0b200101e65907871ade0b4a58"} Dec 09 11:42:51 crc kubenswrapper[4849]: I1209 11:42:51.668503 4849 scope.go:117] "RemoveContainer" containerID="048beac97f1401b80a7107cf946bd8ac882621de80936787f6987e142986bbe4" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.102588 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr"] Dec 09 11:43:10 crc kubenswrapper[4849]: E1209 11:43:10.103484 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerName="extract-content" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.103501 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerName="extract-content" Dec 09 11:43:10 crc kubenswrapper[4849]: E1209 11:43:10.103535 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerName="registry-server" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.103543 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerName="registry-server" Dec 09 11:43:10 crc kubenswrapper[4849]: E1209 11:43:10.103557 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerName="extract-utilities" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.103565 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerName="extract-utilities" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.103694 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df0db64-7578-46b1-94ae-4e93f9b6e3cf" containerName="registry-server" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.104493 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.107125 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vrh44" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.118122 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.121267 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.124636 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vzs6f" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.129385 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.140961 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.227308 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.228603 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.239271 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zdxph" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.243728 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52k5x\" (UniqueName: \"kubernetes.io/projected/f24fc0f5-c0b5-4155-874b-34f3cbb0ad25-kube-api-access-52k5x\") pod \"barbican-operator-controller-manager-7d9dfd778-nkjhr\" (UID: \"f24fc0f5-c0b5-4155-874b-34f3cbb0ad25\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.243801 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw2w5\" (UniqueName: \"kubernetes.io/projected/93362b58-a33b-4683-ad57-6b72bb7d8655-kube-api-access-xw2w5\") pod \"cinder-operator-controller-manager-6c677c69b-ch4jh\" (UID: \"93362b58-a33b-4683-ad57-6b72bb7d8655\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.243836 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj5pd\" (UniqueName: \"kubernetes.io/projected/526627f5-817a-4f47-a28c-cc3597989b1d-kube-api-access-sj5pd\") pod \"designate-operator-controller-manager-697fb699cf-hmntq\" (UID: \"526627f5-817a-4f47-a28c-cc3597989b1d\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.262949 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.270291 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.271204 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.291703 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-fvgmq" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.296560 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.314276 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.315228 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.319607 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-tw5zw" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.334821 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.335757 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.340232 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.340385 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xb95x" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.345013 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.345062 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw2w5\" (UniqueName: \"kubernetes.io/projected/93362b58-a33b-4683-ad57-6b72bb7d8655-kube-api-access-xw2w5\") pod \"cinder-operator-controller-manager-6c677c69b-ch4jh\" (UID: \"93362b58-a33b-4683-ad57-6b72bb7d8655\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.345093 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj5pd\" (UniqueName: \"kubernetes.io/projected/526627f5-817a-4f47-a28c-cc3597989b1d-kube-api-access-sj5pd\") pod \"designate-operator-controller-manager-697fb699cf-hmntq\" (UID: \"526627f5-817a-4f47-a28c-cc3597989b1d\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.345191 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggftz\" (UniqueName: 
\"kubernetes.io/projected/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-kube-api-access-ggftz\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.345251 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7wdq\" (UniqueName: \"kubernetes.io/projected/9143dc55-4bce-4cfe-a704-73cf4e65c91f-kube-api-access-l7wdq\") pod \"heat-operator-controller-manager-5f64f6f8bb-s6jnd\" (UID: \"9143dc55-4bce-4cfe-a704-73cf4e65c91f\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.345295 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcts5\" (UniqueName: \"kubernetes.io/projected/577693e5-e4d7-4a4f-be14-41630da8744f-kube-api-access-gcts5\") pod \"glance-operator-controller-manager-5697bb5779-xnt5q\" (UID: \"577693e5-e4d7-4a4f-be14-41630da8744f\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.345329 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52k5x\" (UniqueName: \"kubernetes.io/projected/f24fc0f5-c0b5-4155-874b-34f3cbb0ad25-kube-api-access-52k5x\") pod \"barbican-operator-controller-manager-7d9dfd778-nkjhr\" (UID: \"f24fc0f5-c0b5-4155-874b-34f3cbb0ad25\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.352274 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-z694m"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.353292 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.365203 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-txfgb" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.365329 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.390483 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.394223 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw2w5\" (UniqueName: \"kubernetes.io/projected/93362b58-a33b-4683-ad57-6b72bb7d8655-kube-api-access-xw2w5\") pod \"cinder-operator-controller-manager-6c677c69b-ch4jh\" (UID: \"93362b58-a33b-4683-ad57-6b72bb7d8655\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.405613 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52k5x\" (UniqueName: \"kubernetes.io/projected/f24fc0f5-c0b5-4155-874b-34f3cbb0ad25-kube-api-access-52k5x\") pod \"barbican-operator-controller-manager-7d9dfd778-nkjhr\" (UID: \"f24fc0f5-c0b5-4155-874b-34f3cbb0ad25\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.413314 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.414327 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.418562 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-nv2wf" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.425173 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj5pd\" (UniqueName: \"kubernetes.io/projected/526627f5-817a-4f47-a28c-cc3597989b1d-kube-api-access-sj5pd\") pod \"designate-operator-controller-manager-697fb699cf-hmntq\" (UID: \"526627f5-817a-4f47-a28c-cc3597989b1d\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.425468 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.431990 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.457480 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-z694m"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.458113 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggftz\" (UniqueName: \"kubernetes.io/projected/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-kube-api-access-ggftz\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.458139 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7wdq\" (UniqueName: \"kubernetes.io/projected/9143dc55-4bce-4cfe-a704-73cf4e65c91f-kube-api-access-l7wdq\") pod \"heat-operator-controller-manager-5f64f6f8bb-s6jnd\" (UID: \"9143dc55-4bce-4cfe-a704-73cf4e65c91f\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.458173 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcts5\" (UniqueName: \"kubernetes.io/projected/577693e5-e4d7-4a4f-be14-41630da8744f-kube-api-access-gcts5\") pod \"glance-operator-controller-manager-5697bb5779-xnt5q\" (UID: \"577693e5-e4d7-4a4f-be14-41630da8744f\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.458214 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:10 crc kubenswrapper[4849]: E1209 11:43:10.458321 4849 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:10 crc kubenswrapper[4849]: E1209 11:43:10.458369 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert podName:d2444ef1-caaa-4c1f-b3ac-a503b340bb87 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:10.958350438 +0000 UTC m=+973.498234754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert") pod "infra-operator-controller-manager-78d48bff9d-88dmp" (UID: "d2444ef1-caaa-4c1f-b3ac-a503b340bb87") : secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.460163 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.472481 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.473492 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.513823 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xd799" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.520191 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7wdq\" (UniqueName: \"kubernetes.io/projected/9143dc55-4bce-4cfe-a704-73cf4e65c91f-kube-api-access-l7wdq\") pod \"heat-operator-controller-manager-5f64f6f8bb-s6jnd\" (UID: \"9143dc55-4bce-4cfe-a704-73cf4e65c91f\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.520270 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcts5\" (UniqueName: \"kubernetes.io/projected/577693e5-e4d7-4a4f-be14-41630da8744f-kube-api-access-gcts5\") pod \"glance-operator-controller-manager-5697bb5779-xnt5q\" (UID: \"577693e5-e4d7-4a4f-be14-41630da8744f\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.520979 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggftz\" (UniqueName: \"kubernetes.io/projected/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-kube-api-access-ggftz\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.545790 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.550936 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.552054 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.557222 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-5wn7r" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.558999 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dxl\" (UniqueName: \"kubernetes.io/projected/f891f270-493d-463a-9514-127200c5c495-kube-api-access-74dxl\") pod \"horizon-operator-controller-manager-68c6d99b8f-hzr9p\" (UID: \"f891f270-493d-463a-9514-127200c5c495\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.559084 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25m5l\" (UniqueName: \"kubernetes.io/projected/16904597-72e8-41f0-8810-cd75ff6af881-kube-api-access-25m5l\") pod \"ironic-operator-controller-manager-967d97867-z694m\" (UID: \"16904597-72e8-41f0-8810-cd75ff6af881\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.608927 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.610198 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.611164 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.612905 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gdnjm" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.626000 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.633798 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.639908 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.655313 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.659960 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzh6c\" (UniqueName: \"kubernetes.io/projected/f671b0c9-9b37-4150-a41c-7c95a969c149-kube-api-access-zzh6c\") pod \"manila-operator-controller-manager-5b5fd79c9c-hr8b8\" (UID: \"f671b0c9-9b37-4150-a41c-7c95a969c149\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.660039 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dxl\" (UniqueName: \"kubernetes.io/projected/f891f270-493d-463a-9514-127200c5c495-kube-api-access-74dxl\") pod \"horizon-operator-controller-manager-68c6d99b8f-hzr9p\" (UID: \"f891f270-493d-463a-9514-127200c5c495\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.660146 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25m5l\" (UniqueName: \"kubernetes.io/projected/16904597-72e8-41f0-8810-cd75ff6af881-kube-api-access-25m5l\") pod \"ironic-operator-controller-manager-967d97867-z694m\" (UID: \"16904597-72e8-41f0-8810-cd75ff6af881\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.660176 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2t6\" (UniqueName: \"kubernetes.io/projected/f51d531d-7b17-44e5-907d-9272df92466f-kube-api-access-pl2t6\") pod \"keystone-operator-controller-manager-7765d96ddf-ns9dz\" (UID: \"f51d531d-7b17-44e5-907d-9272df92466f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.687891 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.713231 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.744850 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-62hm2" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.802457 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwqs\" (UniqueName: \"kubernetes.io/projected/473b8be0-bc7e-4c51-ab9a-73771a1664c2-kube-api-access-vjwqs\") pod \"mariadb-operator-controller-manager-79c8c4686c-8tvx7\" (UID: \"473b8be0-bc7e-4c51-ab9a-73771a1664c2\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.802658 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl2t6\" (UniqueName: \"kubernetes.io/projected/f51d531d-7b17-44e5-907d-9272df92466f-kube-api-access-pl2t6\") pod \"keystone-operator-controller-manager-7765d96ddf-ns9dz\" (UID: \"f51d531d-7b17-44e5-907d-9272df92466f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.802773 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfpp9\" (UniqueName: \"kubernetes.io/projected/40bac272-7e22-45e7-841c-7cdd4f87f1ad-kube-api-access-sfpp9\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-5w7tw\" (UID: \"40bac272-7e22-45e7-841c-7cdd4f87f1ad\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.802827 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzh6c\" (UniqueName: \"kubernetes.io/projected/f671b0c9-9b37-4150-a41c-7c95a969c149-kube-api-access-zzh6c\") pod \"manila-operator-controller-manager-5b5fd79c9c-hr8b8\" (UID: \"f671b0c9-9b37-4150-a41c-7c95a969c149\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.803882 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25m5l\" (UniqueName: \"kubernetes.io/projected/16904597-72e8-41f0-8810-cd75ff6af881-kube-api-access-25m5l\") pod \"ironic-operator-controller-manager-967d97867-z694m\" (UID: \"16904597-72e8-41f0-8810-cd75ff6af881\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.805454 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dxl\" (UniqueName: \"kubernetes.io/projected/f891f270-493d-463a-9514-127200c5c495-kube-api-access-74dxl\") pod \"horizon-operator-controller-manager-68c6d99b8f-hzr9p\" (UID: \"f891f270-493d-463a-9514-127200c5c495\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.820754 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.832882 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.834545 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.838967 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-92pft" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.861592 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.872878 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzh6c\" (UniqueName: \"kubernetes.io/projected/f671b0c9-9b37-4150-a41c-7c95a969c149-kube-api-access-zzh6c\") pod \"manila-operator-controller-manager-5b5fd79c9c-hr8b8\" (UID: \"f671b0c9-9b37-4150-a41c-7c95a969c149\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.905200 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfpp9\" (UniqueName: \"kubernetes.io/projected/40bac272-7e22-45e7-841c-7cdd4f87f1ad-kube-api-access-sfpp9\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-5w7tw\" (UID: \"40bac272-7e22-45e7-841c-7cdd4f87f1ad\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.905257 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwqs\" (UniqueName: \"kubernetes.io/projected/473b8be0-bc7e-4c51-ab9a-73771a1664c2-kube-api-access-vjwqs\") pod \"mariadb-operator-controller-manager-79c8c4686c-8tvx7\" (UID: \"473b8be0-bc7e-4c51-ab9a-73771a1664c2\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.905293 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ld9\" (UniqueName: \"kubernetes.io/projected/53f856a1-0579-4b0a-8294-a2ffb94bf4e5-kube-api-access-59ld9\") pod \"nova-operator-controller-manager-697bc559fc-5wpm6\" (UID: \"53f856a1-0579-4b0a-8294-a2ffb94bf4e5\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.907024 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.908289 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.913788 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2t6\" (UniqueName: \"kubernetes.io/projected/f51d531d-7b17-44e5-907d-9272df92466f-kube-api-access-pl2t6\") pod \"keystone-operator-controller-manager-7765d96ddf-ns9dz\" (UID: \"f51d531d-7b17-44e5-907d-9272df92466f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.914084 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.934255 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-576jz" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.935446 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwqs\" (UniqueName: \"kubernetes.io/projected/473b8be0-bc7e-4c51-ab9a-73771a1664c2-kube-api-access-vjwqs\") pod \"mariadb-operator-controller-manager-79c8c4686c-8tvx7\" (UID: \"473b8be0-bc7e-4c51-ab9a-73771a1664c2\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.937160 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.972293 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp"] Dec 09 11:43:10 crc kubenswrapper[4849]: I1209 11:43:10.972449 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.007494 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59ld9\" (UniqueName: \"kubernetes.io/projected/53f856a1-0579-4b0a-8294-a2ffb94bf4e5-kube-api-access-59ld9\") pod \"nova-operator-controller-manager-697bc559fc-5wpm6\" (UID: \"53f856a1-0579-4b0a-8294-a2ffb94bf4e5\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.007769 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnpcg\" (UniqueName: \"kubernetes.io/projected/48eb886e-615e-419e-af3a-28e348e24a13-kube-api-access-rnpcg\") pod \"octavia-operator-controller-manager-998648c74-4hsgp\" (UID: \"48eb886e-615e-419e-af3a-28e348e24a13\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.007866 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.008114 4849 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.008228 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert podName:d2444ef1-caaa-4c1f-b3ac-a503b340bb87 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:12.008209052 +0000 UTC m=+974.548093368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert") pod "infra-operator-controller-manager-78d48bff9d-88dmp" (UID: "d2444ef1-caaa-4c1f-b3ac-a503b340bb87") : secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.022866 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfpp9\" (UniqueName: \"kubernetes.io/projected/40bac272-7e22-45e7-841c-7cdd4f87f1ad-kube-api-access-sfpp9\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-5w7tw\" (UID: \"40bac272-7e22-45e7-841c-7cdd4f87f1ad\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.022955 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.028480 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.029710 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.040732 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-kbp9s" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.040906 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.070585 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.070845 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.076681 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.090174 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-6xz62"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.091292 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.090377 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.095912 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.097062 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.106609 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nkpxl" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.106860 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-g4zrq" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.107011 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4gwfb" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.132142 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.132643 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.138803 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db64r\" (UniqueName: \"kubernetes.io/projected/ed081101-9961-4cf9-9725-0ec764af322b-kube-api-access-db64r\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.138930 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnpcg\" (UniqueName: \"kubernetes.io/projected/48eb886e-615e-419e-af3a-28e348e24a13-kube-api-access-rnpcg\") pod \"octavia-operator-controller-manager-998648c74-4hsgp\" (UID: \"48eb886e-615e-419e-af3a-28e348e24a13\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.138958 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.145449 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-6xz62"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.170135 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ld9\" (UniqueName: \"kubernetes.io/projected/53f856a1-0579-4b0a-8294-a2ffb94bf4e5-kube-api-access-59ld9\") pod \"nova-operator-controller-manager-697bc559fc-5wpm6\" (UID: \"53f856a1-0579-4b0a-8294-a2ffb94bf4e5\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.178473 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.179301 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnpcg\" (UniqueName: \"kubernetes.io/projected/48eb886e-615e-419e-af3a-28e348e24a13-kube-api-access-rnpcg\") pod \"octavia-operator-controller-manager-998648c74-4hsgp\" (UID: \"48eb886e-615e-419e-af3a-28e348e24a13\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.184450 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.193315 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.194298 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.202325 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-blvrs" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.212141 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.213472 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.218495 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.221741 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.225473 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-sjrdg" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.238520 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.239767 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.239819 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db64r\" (UniqueName: \"kubernetes.io/projected/ed081101-9961-4cf9-9725-0ec764af322b-kube-api-access-db64r\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.240293 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52w46\" (UniqueName: \"kubernetes.io/projected/232105fe-9c4a-438e-bac7-0f13e78fb972-kube-api-access-52w46\") pod \"placement-operator-controller-manager-78f8948974-6xz62\" (UID: \"232105fe-9c4a-438e-bac7-0f13e78fb972\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.240470 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs845\" (UniqueName: \"kubernetes.io/projected/42cdfefe-0e9c-4ff8-8447-5153ac692a2d-kube-api-access-cs845\") pod \"swift-operator-controller-manager-9d58d64bc-nn66x\" (UID: \"42cdfefe-0e9c-4ff8-8447-5153ac692a2d\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.240774 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22xgh\" (UniqueName: \"kubernetes.io/projected/bc26bf04-a33a-4314-a0fa-216360ac6d3b-kube-api-access-22xgh\") pod \"ovn-operator-controller-manager-b6456fdb6-26rfq\" (UID: 
\"bc26bf04-a33a-4314-a0fa-216360ac6d3b\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.240872 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.241146 4849 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.241286 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert podName:ed081101-9961-4cf9-9725-0ec764af322b nodeName:}" failed. No retries permitted until 2025-12-09 11:43:11.741266408 +0000 UTC m=+974.281150724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fm55d8" (UID: "ed081101-9961-4cf9-9725-0ec764af322b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.255008 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fjjdn" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.266405 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.275238 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.316109 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db64r\" (UniqueName: \"kubernetes.io/projected/ed081101-9961-4cf9-9725-0ec764af322b-kube-api-access-db64r\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.352124 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22xgh\" (UniqueName: \"kubernetes.io/projected/bc26bf04-a33a-4314-a0fa-216360ac6d3b-kube-api-access-22xgh\") pod \"ovn-operator-controller-manager-b6456fdb6-26rfq\" (UID: \"bc26bf04-a33a-4314-a0fa-216360ac6d3b\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.352195 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl4p2\" (UniqueName: \"kubernetes.io/projected/3cd2993d-bfa4-4aae-b11c-cdc46b9671da-kube-api-access-sl4p2\") pod \"watcher-operator-controller-manager-667bd8d554-k9vnf\" (UID: \"3cd2993d-bfa4-4aae-b11c-cdc46b9671da\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.352288 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwdx\" (UniqueName: \"kubernetes.io/projected/9b389d0f-7f09-4744-b582-cf09ffe3c937-kube-api-access-xrwdx\") pod \"telemetry-operator-controller-manager-58d5ff84df-gnw95\" (UID: \"9b389d0f-7f09-4744-b582-cf09ffe3c937\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.352328 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52w46\" (UniqueName: \"kubernetes.io/projected/232105fe-9c4a-438e-bac7-0f13e78fb972-kube-api-access-52w46\") pod \"placement-operator-controller-manager-78f8948974-6xz62\" (UID: \"232105fe-9c4a-438e-bac7-0f13e78fb972\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.352361 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs845\" (UniqueName: \"kubernetes.io/projected/42cdfefe-0e9c-4ff8-8447-5153ac692a2d-kube-api-access-cs845\") pod \"swift-operator-controller-manager-9d58d64bc-nn66x\" (UID: \"42cdfefe-0e9c-4ff8-8447-5153ac692a2d\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.352427 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgpwt\" (UniqueName: \"kubernetes.io/projected/ab547409-b5b9-41ba-897d-01bd4d233906-kube-api-access-hgpwt\") pod \"test-operator-controller-manager-5854674fcc-xrq2w\" (UID: \"ab547409-b5b9-41ba-897d-01bd4d233906\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.364718 4849 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.365728 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.399662 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.399881 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.404742 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-h6x58" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.412477 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs845\" (UniqueName: \"kubernetes.io/projected/42cdfefe-0e9c-4ff8-8447-5153ac692a2d-kube-api-access-cs845\") pod \"swift-operator-controller-manager-9d58d64bc-nn66x\" (UID: \"42cdfefe-0e9c-4ff8-8447-5153ac692a2d\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.422482 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.424184 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22xgh\" (UniqueName: \"kubernetes.io/projected/bc26bf04-a33a-4314-a0fa-216360ac6d3b-kube-api-access-22xgh\") pod \"ovn-operator-controller-manager-b6456fdb6-26rfq\" (UID: \"bc26bf04-a33a-4314-a0fa-216360ac6d3b\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.433999 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52w46\" (UniqueName: \"kubernetes.io/projected/232105fe-9c4a-438e-bac7-0f13e78fb972-kube-api-access-52w46\") pod \"placement-operator-controller-manager-78f8948974-6xz62\" (UID: \"232105fe-9c4a-438e-bac7-0f13e78fb972\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.458404 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xj7\" (UniqueName: \"kubernetes.io/projected/6b911c78-1753-46a4-a042-c1395c2a73a9-kube-api-access-68xj7\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.458487 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgpwt\" (UniqueName: \"kubernetes.io/projected/ab547409-b5b9-41ba-897d-01bd4d233906-kube-api-access-hgpwt\") pod \"test-operator-controller-manager-5854674fcc-xrq2w\" (UID: \"ab547409-b5b9-41ba-897d-01bd4d233906\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.458522 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.458543 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.458572 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl4p2\" (UniqueName: \"kubernetes.io/projected/3cd2993d-bfa4-4aae-b11c-cdc46b9671da-kube-api-access-sl4p2\") pod \"watcher-operator-controller-manager-667bd8d554-k9vnf\" (UID: \"3cd2993d-bfa4-4aae-b11c-cdc46b9671da\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.458627 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrwdx\" (UniqueName: \"kubernetes.io/projected/9b389d0f-7f09-4744-b582-cf09ffe3c937-kube-api-access-xrwdx\") pod \"telemetry-operator-controller-manager-58d5ff84df-gnw95\" (UID: \"9b389d0f-7f09-4744-b582-cf09ffe3c937\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.515874 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgpwt\" (UniqueName: \"kubernetes.io/projected/ab547409-b5b9-41ba-897d-01bd4d233906-kube-api-access-hgpwt\") pod \"test-operator-controller-manager-5854674fcc-xrq2w\" (UID: \"ab547409-b5b9-41ba-897d-01bd4d233906\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.521498 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl4p2\" (UniqueName: \"kubernetes.io/projected/3cd2993d-bfa4-4aae-b11c-cdc46b9671da-kube-api-access-sl4p2\") pod \"watcher-operator-controller-manager-667bd8d554-k9vnf\" (UID: \"3cd2993d-bfa4-4aae-b11c-cdc46b9671da\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.528016 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.558787 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.560023 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.560088 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.560220 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68xj7\" (UniqueName: \"kubernetes.io/projected/6b911c78-1753-46a4-a042-c1395c2a73a9-kube-api-access-68xj7\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.560942 4849 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.561013 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:12.060986249 +0000 UTC m=+974.600870575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "metrics-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.561342 4849 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.561383 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:12.061368399 +0000 UTC m=+974.601252715 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "webhook-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.586098 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrwdx\" (UniqueName: \"kubernetes.io/projected/9b389d0f-7f09-4744-b582-cf09ffe3c937-kube-api-access-xrwdx\") pod \"telemetry-operator-controller-manager-58d5ff84df-gnw95\" (UID: \"9b389d0f-7f09-4744-b582-cf09ffe3c937\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.649324 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68xj7\" (UniqueName: \"kubernetes.io/projected/6b911c78-1753-46a4-a042-c1395c2a73a9-kube-api-access-68xj7\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.657082 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.726105 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.727018 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.756380 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.775958 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.787164 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dvldw" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.817612 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn"] Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.838723 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.839245 4849 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: E1209 11:43:11.839284 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert podName:ed081101-9961-4cf9-9725-0ec764af322b nodeName:}" failed. No retries permitted until 2025-12-09 11:43:12.839271395 +0000 UTC m=+975.379155711 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fm55d8" (UID: "ed081101-9961-4cf9-9725-0ec764af322b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.902053 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" Dec 09 11:43:11 crc kubenswrapper[4849]: I1209 11:43:11.940832 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfgzp\" (UniqueName: \"kubernetes.io/projected/3e922935-a9e7-49ab-bd10-f575e0ab0445-kube-api-access-sfgzp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rx9bn\" (UID: \"3e922935-a9e7-49ab-bd10-f575e0ab0445\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.043195 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfgzp\" (UniqueName: \"kubernetes.io/projected/3e922935-a9e7-49ab-bd10-f575e0ab0445-kube-api-access-sfgzp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rx9bn\" (UID: \"3e922935-a9e7-49ab-bd10-f575e0ab0445\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.043725 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:12 crc kubenswrapper[4849]: E1209 11:43:12.043879 4849 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:12 crc kubenswrapper[4849]: E1209 11:43:12.043946 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert podName:d2444ef1-caaa-4c1f-b3ac-a503b340bb87 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:14.04392725 +0000 UTC m=+976.583811566 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert") pod "infra-operator-controller-manager-78d48bff9d-88dmp" (UID: "d2444ef1-caaa-4c1f-b3ac-a503b340bb87") : secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.099743 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfgzp\" (UniqueName: \"kubernetes.io/projected/3e922935-a9e7-49ab-bd10-f575e0ab0445-kube-api-access-sfgzp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rx9bn\" (UID: \"3e922935-a9e7-49ab-bd10-f575e0ab0445\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.147070 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.147138 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:12 crc kubenswrapper[4849]: E1209 11:43:12.147385 4849 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 11:43:12 crc kubenswrapper[4849]: E1209 11:43:12.147473 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:13.147445652 +0000 UTC m=+975.687329968 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "webhook-server-cert" not found Dec 09 11:43:12 crc kubenswrapper[4849]: E1209 11:43:12.147583 4849 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 11:43:12 crc kubenswrapper[4849]: E1209 11:43:12.147668 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:13.147643617 +0000 UTC m=+975.687527993 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "metrics-server-cert" not found Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.228739 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd"] Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.257711 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh"] Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.263381 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr"] Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.284855 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" Dec 09 11:43:12 crc kubenswrapper[4849]: W1209 11:43:12.352488 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf24fc0f5_c0b5_4155_874b_34f3cbb0ad25.slice/crio-10066ae311c37a79339a3bb835261ac017c6ca1c551262e0eaae23e4494953d8 WatchSource:0}: Error finding container 10066ae311c37a79339a3bb835261ac017c6ca1c551262e0eaae23e4494953d8: Status 404 returned error can't find the container with id 10066ae311c37a79339a3bb835261ac017c6ca1c551262e0eaae23e4494953d8 Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.703914 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-z694m"] Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.730947 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz"] Dec 09 11:43:12 crc kubenswrapper[4849]: W1209 11:43:12.754269 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51d531d_7b17_44e5_907d_9272df92466f.slice/crio-d4b7d7a576825000edc3253d56a340d6b11f13b18476f3555d2dbfd1d5cb3543 WatchSource:0}: Error finding container d4b7d7a576825000edc3253d56a340d6b11f13b18476f3555d2dbfd1d5cb3543: Status 404 returned error can't find the container with id d4b7d7a576825000edc3253d56a340d6b11f13b18476f3555d2dbfd1d5cb3543 Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.795633 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq"] Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.805190 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q"] Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.840682 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8"] Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.857022 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" event={"ID":"f24fc0f5-c0b5-4155-874b-34f3cbb0ad25","Type":"ContainerStarted","Data":"10066ae311c37a79339a3bb835261ac017c6ca1c551262e0eaae23e4494953d8"} Dec 09 11:43:12 crc 
kubenswrapper[4849]: I1209 11:43:12.857955 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" event={"ID":"f51d531d-7b17-44e5-907d-9272df92466f","Type":"ContainerStarted","Data":"d4b7d7a576825000edc3253d56a340d6b11f13b18476f3555d2dbfd1d5cb3543"} Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.859329 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" event={"ID":"93362b58-a33b-4683-ad57-6b72bb7d8655","Type":"ContainerStarted","Data":"8df4beef5da6afff4a2f37fda2a874030783cd7b2627e43a8de079b56361073f"} Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.861447 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" event={"ID":"9143dc55-4bce-4cfe-a704-73cf4e65c91f","Type":"ContainerStarted","Data":"4d88da254474b40e89dc407018c076f6176b5d07caed06116446ccc48400efb5"} Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.862359 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" event={"ID":"577693e5-e4d7-4a4f-be14-41630da8744f","Type":"ContainerStarted","Data":"a3d69692dfb2b739849cb59101de58049e1c906ad0d8803c81d428128dfaab45"} Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.866291 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" event={"ID":"526627f5-817a-4f47-a28c-cc3597989b1d","Type":"ContainerStarted","Data":"dbe7b55d7f1257cdc728428edb5f88f090dac0c2e5529fa93c356b19658a9929"} Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.870531 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:12 crc kubenswrapper[4849]: E1209 11:43:12.870743 4849 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:12 crc kubenswrapper[4849]: E1209 11:43:12.870798 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert podName:ed081101-9961-4cf9-9725-0ec764af322b nodeName:}" failed. No retries permitted until 2025-12-09 11:43:14.870783581 +0000 UTC m=+977.410667897 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fm55d8" (UID: "ed081101-9961-4cf9-9725-0ec764af322b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.871566 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" event={"ID":"f671b0c9-9b37-4150-a41c-7c95a969c149","Type":"ContainerStarted","Data":"2163283a49f88b19a9ee951d9087a8b54e487d0fc8104d7a975d8c7489af14e5"} Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.872687 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" event={"ID":"16904597-72e8-41f0-8810-cd75ff6af881","Type":"ContainerStarted","Data":"7aaaa78d5c86484e63a9cbd0f240f71333795361869ce9bdb5e14aa1bdbce506"} Dec 09 11:43:12 crc kubenswrapper[4849]: I1209 11:43:12.898928 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7"] Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.111387 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x"] Dec 09 11:43:13 crc kubenswrapper[4849]: W1209 11:43:13.120452 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42cdfefe_0e9c_4ff8_8447_5153ac692a2d.slice/crio-4ea5689cf44419612e244f3108353231017c58bbaed8397bdfb01687d820f3e2 WatchSource:0}: Error finding container 4ea5689cf44419612e244f3108353231017c58bbaed8397bdfb01687d820f3e2: Status 404 returned error can't find the container with id 4ea5689cf44419612e244f3108353231017c58bbaed8397bdfb01687d820f3e2 Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.133903 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp"] Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.141344 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6"] Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.161150 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw"] Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.167402 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p"] Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.176595 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.176657 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " 
pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.176801 4849 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.176880 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:15.176860492 +0000 UTC m=+977.716744868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "metrics-server-cert" not found Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.178479 4849 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.178626 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:15.178604045 +0000 UTC m=+977.718488411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "webhook-server-cert" not found Dec 09 11:43:13 crc kubenswrapper[4849]: W1209 11:43:13.214297 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40bac272_7e22_45e7_841c_7cdd4f87f1ad.slice/crio-e7f3901f29d93ebdaaf2ce73a226d5bc5dc2672806aa5eb4ddd9558325753af6 WatchSource:0}: Error finding container e7f3901f29d93ebdaaf2ce73a226d5bc5dc2672806aa5eb4ddd9558325753af6: Status 404 returned error can't find the container with id e7f3901f29d93ebdaaf2ce73a226d5bc5dc2672806aa5eb4ddd9558325753af6 Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.338340 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-6xz62"] Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.354265 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95"] Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.367638 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w"] Dec 09 11:43:13 crc kubenswrapper[4849]: W1209 11:43:13.381531 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b389d0f_7f09_4744_b582_cf09ffe3c937.slice/crio-8b79d89d4f4dfc91784cf3fd9d5708a13cf2485aeefd13fa3b8dfd18a72d4e2b WatchSource:0}: Error finding container 8b79d89d4f4dfc91784cf3fd9d5708a13cf2485aeefd13fa3b8dfd18a72d4e2b: Status 404 returned error can't find the container with id 8b79d89d4f4dfc91784cf3fd9d5708a13cf2485aeefd13fa3b8dfd18a72d4e2b Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.385121 4849 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrwdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-gnw95_openstack-operators(9b389d0f-7f09-4744-b582-cf09ffe3c937): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.391565 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrwdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-gnw95_openstack-operators(9b389d0f-7f09-4744-b582-cf09ffe3c937): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.392663 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" podUID="9b389d0f-7f09-4744-b582-cf09ffe3c937" Dec 09 11:43:13 crc kubenswrapper[4849]: W1209 11:43:13.491872 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab547409_b5b9_41ba_897d_01bd4d233906.slice/crio-9491b6e283d62baf8a04140fa6cc41157769cb1df2520929b595343a90cd576b WatchSource:0}: Error finding container 9491b6e283d62baf8a04140fa6cc41157769cb1df2520929b595343a90cd576b: Status 404 returned error can't find the container with id 9491b6e283d62baf8a04140fa6cc41157769cb1df2520929b595343a90cd576b Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.586522 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq"] Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.592893 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf"] Dec 09 11:43:13 crc kubenswrapper[4849]: W1209 11:43:13.622839 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc26bf04_a33a_4314_a0fa_216360ac6d3b.slice/crio-32d5c24c60d97fb14e768ef1162b598dfebca9416cfbb1d15266b2ae2a6b8ff3 WatchSource:0}: Error finding container 32d5c24c60d97fb14e768ef1162b598dfebca9416cfbb1d15266b2ae2a6b8ff3: Status 404 returned error can't find the container with id 32d5c24c60d97fb14e768ef1162b598dfebca9416cfbb1d15266b2ae2a6b8ff3 Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.636840 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22xgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-26rfq_openstack-operators(bc26bf04-a33a-4314-a0fa-216360ac6d3b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.640022 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22xgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-26rfq_openstack-operators(bc26bf04-a33a-4314-a0fa-216360ac6d3b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.643723 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" podUID="bc26bf04-a33a-4314-a0fa-216360ac6d3b" Dec 09 11:43:13 crc kubenswrapper[4849]: W1209 11:43:13.649887 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cd2993d_bfa4_4aae_b11c_cdc46b9671da.slice/crio-a4252d99713a96842bb363c32afda52e28e5af7897526100ac0b2aef72e05538 WatchSource:0}: Error finding container a4252d99713a96842bb363c32afda52e28e5af7897526100ac0b2aef72e05538: Status 404 returned error can't find the container with id a4252d99713a96842bb363c32afda52e28e5af7897526100ac0b2aef72e05538 Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.651530 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn"] Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.654352 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sl4p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-k9vnf_openstack-operators(3cd2993d-bfa4-4aae-b11c-cdc46b9671da): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.660744 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sl4p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-k9vnf_openstack-operators(3cd2993d-bfa4-4aae-b11c-cdc46b9671da): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.662215 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" podUID="3cd2993d-bfa4-4aae-b11c-cdc46b9671da" Dec 09 11:43:13 crc kubenswrapper[4849]: W1209 11:43:13.669304 4849 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e922935_a9e7_49ab_bd10_f575e0ab0445.slice/crio-fb216970a44d115f7b4fffca8f8a4944715355c2630045d6a2309b7e631c582a WatchSource:0}: Error finding container fb216970a44d115f7b4fffca8f8a4944715355c2630045d6a2309b7e631c582a: Status 404 returned error can't find the container with id fb216970a44d115f7b4fffca8f8a4944715355c2630045d6a2309b7e631c582a Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.674179 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfgzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rx9bn_openstack-operators(3e922935-a9e7-49ab-bd10-f575e0ab0445): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.675305 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" podUID="3e922935-a9e7-49ab-bd10-f575e0ab0445" Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.881591 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" event={"ID":"3cd2993d-bfa4-4aae-b11c-cdc46b9671da","Type":"ContainerStarted","Data":"a4252d99713a96842bb363c32afda52e28e5af7897526100ac0b2aef72e05538"} Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.906530 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" podUID="3cd2993d-bfa4-4aae-b11c-cdc46b9671da" Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.906922 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" event={"ID":"3e922935-a9e7-49ab-bd10-f575e0ab0445","Type":"ContainerStarted","Data":"fb216970a44d115f7b4fffca8f8a4944715355c2630045d6a2309b7e631c582a"} Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.912665 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" podUID="3e922935-a9e7-49ab-bd10-f575e0ab0445" Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.921528 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" event={"ID":"bc26bf04-a33a-4314-a0fa-216360ac6d3b","Type":"ContainerStarted","Data":"32d5c24c60d97fb14e768ef1162b598dfebca9416cfbb1d15266b2ae2a6b8ff3"} Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.928052 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" podUID="bc26bf04-a33a-4314-a0fa-216360ac6d3b" Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.930402 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" event={"ID":"40bac272-7e22-45e7-841c-7cdd4f87f1ad","Type":"ContainerStarted","Data":"e7f3901f29d93ebdaaf2ce73a226d5bc5dc2672806aa5eb4ddd9558325753af6"} Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.934315 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" event={"ID":"f891f270-493d-463a-9514-127200c5c495","Type":"ContainerStarted","Data":"d8e3261e4f5f4da728a0921c1ef5401d7e8c5d4b73d1171df4df333084d50f0d"} Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.935902 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" event={"ID":"473b8be0-bc7e-4c51-ab9a-73771a1664c2","Type":"ContainerStarted","Data":"74a94567fe540c8b5f1fa060bcdcde3b651041b4b704ace217ebcddacd18c004"} Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.937817 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" 
event={"ID":"232105fe-9c4a-438e-bac7-0f13e78fb972","Type":"ContainerStarted","Data":"5109060d6a1815ff4491694b31e3e9055d95413031128de6c7ea203ebde8be62"} Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.941605 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" event={"ID":"48eb886e-615e-419e-af3a-28e348e24a13","Type":"ContainerStarted","Data":"2c309dbe1f38bc7f442caa399e8707c6be033fa018701656c24e8cf3accabab6"} Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.944931 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" event={"ID":"42cdfefe-0e9c-4ff8-8447-5153ac692a2d","Type":"ContainerStarted","Data":"4ea5689cf44419612e244f3108353231017c58bbaed8397bdfb01687d820f3e2"} Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.960688 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" event={"ID":"ab547409-b5b9-41ba-897d-01bd4d233906","Type":"ContainerStarted","Data":"9491b6e283d62baf8a04140fa6cc41157769cb1df2520929b595343a90cd576b"} Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.979526 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" event={"ID":"9b389d0f-7f09-4744-b582-cf09ffe3c937","Type":"ContainerStarted","Data":"8b79d89d4f4dfc91784cf3fd9d5708a13cf2485aeefd13fa3b8dfd18a72d4e2b"} Dec 09 11:43:13 crc kubenswrapper[4849]: I1209 11:43:13.989962 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" event={"ID":"53f856a1-0579-4b0a-8294-a2ffb94bf4e5","Type":"ContainerStarted","Data":"574c5fa832b73b38d8d5cf9010526da23968f4a9cdc586f552fe8d9d771a0a87"} Dec 09 11:43:13 crc kubenswrapper[4849]: E1209 11:43:13.994309 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" podUID="9b389d0f-7f09-4744-b582-cf09ffe3c937" Dec 09 11:43:14 crc kubenswrapper[4849]: I1209 11:43:14.095179 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:14 crc kubenswrapper[4849]: E1209 11:43:14.096114 4849 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:14 crc kubenswrapper[4849]: E1209 11:43:14.096172 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert podName:d2444ef1-caaa-4c1f-b3ac-a503b340bb87 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:18.096153549 +0000 UTC m=+980.636037865 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert") pod "infra-operator-controller-manager-78d48bff9d-88dmp" (UID: "d2444ef1-caaa-4c1f-b3ac-a503b340bb87") : secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:14 crc kubenswrapper[4849]: I1209 11:43:14.907744 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:14 crc kubenswrapper[4849]: E1209 11:43:14.907902 4849 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:14 crc kubenswrapper[4849]: E1209 11:43:14.907970 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert podName:ed081101-9961-4cf9-9725-0ec764af322b nodeName:}" failed. No retries permitted until 2025-12-09 11:43:18.907954051 +0000 UTC m=+981.447838357 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fm55d8" (UID: "ed081101-9961-4cf9-9725-0ec764af322b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:15 crc kubenswrapper[4849]: E1209 11:43:15.015377 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" podUID="3e922935-a9e7-49ab-bd10-f575e0ab0445" Dec 09 11:43:15 crc kubenswrapper[4849]: E1209 11:43:15.019929 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" podUID="3cd2993d-bfa4-4aae-b11c-cdc46b9671da" Dec 09 11:43:15 crc kubenswrapper[4849]: E1209 11:43:15.020229 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" podUID="9b389d0f-7f09-4744-b582-cf09ffe3c937" Dec 09 11:43:15 crc kubenswrapper[4849]: E1209 11:43:15.020338 4849 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" podUID="bc26bf04-a33a-4314-a0fa-216360ac6d3b" Dec 09 11:43:15 crc kubenswrapper[4849]: I1209 11:43:15.223230 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:15 crc kubenswrapper[4849]: I1209 11:43:15.223296 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:15 crc kubenswrapper[4849]: E1209 11:43:15.223486 4849 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 11:43:15 crc kubenswrapper[4849]: E1209 11:43:15.223549 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:19.223535276 +0000 UTC m=+981.763419592 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "webhook-server-cert" not found Dec 09 11:43:15 crc kubenswrapper[4849]: E1209 11:43:15.224202 4849 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 11:43:15 crc kubenswrapper[4849]: E1209 11:43:15.224389 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:19.224304756 +0000 UTC m=+981.764189102 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "metrics-server-cert" not found Dec 09 11:43:18 crc kubenswrapper[4849]: I1209 11:43:18.195700 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:18 crc kubenswrapper[4849]: E1209 11:43:18.196085 4849 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:18 crc kubenswrapper[4849]: E1209 11:43:18.196132 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert podName:d2444ef1-caaa-4c1f-b3ac-a503b340bb87 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:26.196117964 +0000 UTC m=+988.736002280 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert") pod "infra-operator-controller-manager-78d48bff9d-88dmp" (UID: "d2444ef1-caaa-4c1f-b3ac-a503b340bb87") : secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:18 crc kubenswrapper[4849]: I1209 11:43:18.909819 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:18 crc kubenswrapper[4849]: E1209 11:43:18.910783 4849 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:18 crc kubenswrapper[4849]: E1209 11:43:18.910906 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert podName:ed081101-9961-4cf9-9725-0ec764af322b nodeName:}" failed. No retries permitted until 2025-12-09 11:43:26.91089129 +0000 UTC m=+989.450775606 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fm55d8" (UID: "ed081101-9961-4cf9-9725-0ec764af322b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:19 crc kubenswrapper[4849]: I1209 11:43:19.317077 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:19 crc kubenswrapper[4849]: I1209 11:43:19.317222 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:19 crc kubenswrapper[4849]: E1209 11:43:19.317383 4849 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 11:43:19 crc kubenswrapper[4849]: E1209 11:43:19.317457 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:27.317442612 +0000 UTC m=+989.857326928 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "webhook-server-cert" not found Dec 09 11:43:19 crc kubenswrapper[4849]: E1209 11:43:19.317788 4849 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 11:43:19 crc kubenswrapper[4849]: E1209 11:43:19.317813 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:27.317805912 +0000 UTC m=+989.857690228 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "metrics-server-cert" not found Dec 09 11:43:26 crc kubenswrapper[4849]: I1209 11:43:26.225143 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:26 crc kubenswrapper[4849]: E1209 11:43:26.225309 4849 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:26 crc kubenswrapper[4849]: E1209 11:43:26.225880 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert podName:d2444ef1-caaa-4c1f-b3ac-a503b340bb87 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:42.225858854 +0000 UTC m=+1004.765743170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert") pod "infra-operator-controller-manager-78d48bff9d-88dmp" (UID: "d2444ef1-caaa-4c1f-b3ac-a503b340bb87") : secret "infra-operator-webhook-server-cert" not found Dec 09 11:43:26 crc kubenswrapper[4849]: E1209 11:43:26.832609 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 09 11:43:26 crc kubenswrapper[4849]: E1209 11:43:26.832816 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnpcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-4hsgp_openstack-operators(48eb886e-615e-419e-af3a-28e348e24a13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:26 crc kubenswrapper[4849]: I1209 11:43:26.937022 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:26 crc kubenswrapper[4849]: E1209 11:43:26.937317 4849 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:26 crc kubenswrapper[4849]: E1209 11:43:26.937533 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert podName:ed081101-9961-4cf9-9725-0ec764af322b nodeName:}" failed. No retries permitted until 2025-12-09 11:43:42.937511513 +0000 UTC m=+1005.477395829 (durationBeforeRetry 16s). 
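
The pulls that fail with "rpc error: code = Canceled desc = copying config: context canceled" are a different failure mode from the QPS rejections: here CRI-O had already started copying the image when the kubelet's gRPC context for the PullImage call was canceled (plausibly a runtime request timeout or an abandoned pod sync; the log alone does not say which). The pattern in miniature, with a stand-in for the CRI call:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // pullImage stands in for a CRI PullImage RPC that outlives its context.
    func pullImage(ctx context.Context) error {
        select {
        case <-time.After(5 * time.Second): // the image copy takes too long
            return nil
        case <-ctx.Done():
            return ctx.Err() // surfaces to the caller as "context canceled"
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            time.Sleep(100 * time.Millisecond)
            cancel() // the caller gives up on the pull
        }()
        if err := pullImage(ctx); errors.Is(err, context.Canceled) {
            fmt.Println("PullImage failed:", err)
        }
    }
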
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fm55d8" (UID: "ed081101-9961-4cf9-9725-0ec764af322b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 11:43:27 crc kubenswrapper[4849]: I1209 11:43:27.351382 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:27 crc kubenswrapper[4849]: I1209 11:43:27.351457 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:27 crc kubenswrapper[4849]: E1209 11:43:27.351594 4849 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 11:43:27 crc kubenswrapper[4849]: E1209 11:43:27.351647 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:43.351629602 +0000 UTC m=+1005.891513928 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "webhook-server-cert" not found Dec 09 11:43:27 crc kubenswrapper[4849]: E1209 11:43:27.352264 4849 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 11:43:27 crc kubenswrapper[4849]: E1209 11:43:27.352304 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs podName:6b911c78-1753-46a4-a042-c1395c2a73a9 nodeName:}" failed. No retries permitted until 2025-12-09 11:43:43.352291649 +0000 UTC m=+1005.892175965 (durationBeforeRetry 16s). 
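
All of the cert secrets being retried here (infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert, webhook-server-cert, metrics-server-cert) simply do not exist yet; once whatever issues them catches up, the retries below succeed at 11:43:42–43 and the pods get their sandboxes. A hypothetical client-go helper that waits for such a secret, shown only to illustrate the dependency the kubelet is retrying around; the namespace and secret name are taken from the log, but the helper itself is not part of any of these operators:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSecret is a hypothetical helper: poll until the secret exists,
    // i.e. until the condition behind "MountVolume.SetUp failed ...
    // secret not found" clears.
    func waitForSecret(cs kubernetes.Interface, ns, name string) {
        for {
            _, err := cs.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                fmt.Printf("secret %s/%s present\n", ns, name)
                return
            }
            fmt.Printf("secret %s/%s not ready: %v\n", ns, name, err)
            time.Sleep(4 * time.Second)
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        waitForSecret(kubernetes.NewForConfigOrDie(cfg), "openstack-operators", "webhook-server-cert")
    }
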
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs") pod "openstack-operator-controller-manager-7cfb8477d8-j2tf9" (UID: "6b911c78-1753-46a4-a042-c1395c2a73a9") : secret "metrics-server-cert" not found Dec 09 11:43:30 crc kubenswrapper[4849]: E1209 11:43:30.015333 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 09 11:43:30 crc kubenswrapper[4849]: E1209 11:43:30.016771 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cs845,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-nn66x_openstack-operators(42cdfefe-0e9c-4ff8-8447-5153ac692a2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:30 crc kubenswrapper[4849]: I1209 11:43:30.027690 4849 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 11:43:31 crc kubenswrapper[4849]: E1209 11:43:31.135921 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 09 11:43:31 crc kubenswrapper[4849]: E1209 11:43:31.136145 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59ld9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-5wpm6_openstack-operators(53f856a1-0579-4b0a-8294-a2ffb94bf4e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:32 crc kubenswrapper[4849]: E1209 11:43:32.179546 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 09 11:43:32 crc kubenswrapper[4849]: E1209 11:43:32.180147 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzh6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-hr8b8_openstack-operators(f671b0c9-9b37-4150-a41c-7c95a969c149): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:33 crc kubenswrapper[4849]: E1209 11:43:33.879218 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 09 11:43:33 crc kubenswrapper[4849]: E1209 11:43:33.879509 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfpp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-5w7tw_openstack-operators(40bac272-7e22-45e7-841c-7cdd4f87f1ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:34 crc kubenswrapper[4849]: E1209 11:43:34.597554 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a" Dec 09 11:43:34 crc kubenswrapper[4849]: E1209 11:43:34.598269 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sj5pd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-hmntq_openstack-operators(526627f5-817a-4f47-a28c-cc3597989b1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:35 crc kubenswrapper[4849]: E1209 11:43:35.215226 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 09 11:43:35 crc kubenswrapper[4849]: E1209 11:43:35.215474 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l7wdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-s6jnd_openstack-operators(9143dc55-4bce-4cfe-a704-73cf4e65c91f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:35 crc kubenswrapper[4849]: E1209 11:43:35.893341 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 09 11:43:35 crc kubenswrapper[4849]: E1209 11:43:35.893595 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-52w46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-6xz62_openstack-operators(232105fe-9c4a-438e-bac7-0f13e78fb972): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:36 crc kubenswrapper[4849]: E1209 11:43:36.634639 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87" Dec 09 11:43:36 crc kubenswrapper[4849]: E1209 11:43:36.635244 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25m5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-z694m_openstack-operators(16904597-72e8-41f0-8810-cd75ff6af881): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:37 crc kubenswrapper[4849]: E1209 11:43:37.256679 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3" Dec 09 11:43:37 crc kubenswrapper[4849]: E1209 11:43:37.258547 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xw2w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-ch4jh_openstack-operators(93362b58-a33b-4683-ad57-6b72bb7d8655): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:42 crc kubenswrapper[4849]: I1209 11:43:42.232681 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:42 crc kubenswrapper[4849]: I1209 11:43:42.247280 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2444ef1-caaa-4c1f-b3ac-a503b340bb87-cert\") pod \"infra-operator-controller-manager-78d48bff9d-88dmp\" (UID: \"d2444ef1-caaa-4c1f-b3ac-a503b340bb87\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:42 crc kubenswrapper[4849]: I1209 11:43:42.464549 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:43:42 crc kubenswrapper[4849]: I1209 11:43:42.943104 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:42 crc kubenswrapper[4849]: I1209 11:43:42.957266 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed081101-9961-4cf9-9725-0ec764af322b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fm55d8\" (UID: \"ed081101-9961-4cf9-9725-0ec764af322b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:43 crc kubenswrapper[4849]: I1209 11:43:43.245954 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:43:43 crc kubenswrapper[4849]: I1209 11:43:43.449452 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:43 crc kubenswrapper[4849]: I1209 11:43:43.449514 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:43 crc kubenswrapper[4849]: I1209 11:43:43.457212 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-metrics-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:43 crc kubenswrapper[4849]: I1209 11:43:43.458146 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b911c78-1753-46a4-a042-c1395c2a73a9-webhook-certs\") pod \"openstack-operator-controller-manager-7cfb8477d8-j2tf9\" (UID: \"6b911c78-1753-46a4-a042-c1395c2a73a9\") " pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:43 crc kubenswrapper[4849]: I1209 11:43:43.731181 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:43 crc kubenswrapper[4849]: E1209 11:43:43.759150 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027" Dec 09 11:43:43 crc kubenswrapper[4849]: E1209 11:43:43.759493 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gcts5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5697bb5779-xnt5q_openstack-operators(577693e5-e4d7-4a4f-be14-41630da8744f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:44 crc kubenswrapper[4849]: E1209 11:43:44.639115 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8" Dec 09 11:43:44 crc kubenswrapper[4849]: E1209 11:43:44.639624 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sl4p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-k9vnf_openstack-operators(3cd2993d-bfa4-4aae-b11c-cdc46b9671da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:46 crc kubenswrapper[4849]: E1209 11:43:46.468766 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 09 11:43:46 crc kubenswrapper[4849]: E1209 11:43:46.469119 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 
500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22xgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-26rfq_openstack-operators(bc26bf04-a33a-4314-a0fa-216360ac6d3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:47 crc kubenswrapper[4849]: E1209 11:43:47.343334 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f" Dec 09 11:43:47 crc kubenswrapper[4849]: E1209 11:43:47.344307 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrwdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-gnw95_openstack-operators(9b389d0f-7f09-4744-b582-cf09ffe3c937): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:49 crc kubenswrapper[4849]: E1209 11:43:49.768483 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 09 11:43:49 crc kubenswrapper[4849]: E1209 11:43:49.768957 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pl2t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-ns9dz_openstack-operators(f51d531d-7b17-44e5-907d-9272df92466f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:43:50 crc kubenswrapper[4849]: E1209 11:43:50.441205 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 09 11:43:50 crc kubenswrapper[4849]: E1209 11:43:50.442246 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfgzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rx9bn_openstack-operators(3e922935-a9e7-49ab-bd10-f575e0ab0445): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 09 11:43:50 crc kubenswrapper[4849]: E1209 11:43:50.444509 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" podUID="3e922935-a9e7-49ab-bd10-f575e0ab0445" Dec 09 11:43:51 crc kubenswrapper[4849]: I1209 11:43:51.118934 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp"] Dec 09 11:43:51 crc kubenswrapper[4849]: I1209 11:43:51.144313 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8"] Dec 09 11:43:51 crc kubenswrapper[4849]: I1209 11:43:51.355655 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9"] Dec 09 11:43:52 crc kubenswrapper[4849]: I1209 11:43:52.303713 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" event={"ID":"ab547409-b5b9-41ba-897d-01bd4d233906","Type":"ContainerStarted","Data":"df9a63100be60f6478b6ebc7d699d3989bca0de43fc53b67008b5433b0139485"} Dec 09 11:43:52 crc kubenswrapper[4849]: I1209 11:43:52.305625 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" event={"ID":"6b911c78-1753-46a4-a042-c1395c2a73a9","Type":"ContainerStarted","Data":"43dea0176b01b22c6fbb01f3ddc3d2e7d52bf6e1063dc722b5ab291aa91659c6"} Dec 09 11:43:52 crc kubenswrapper[4849]: I1209 11:43:52.308533 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" event={"ID":"ed081101-9961-4cf9-9725-0ec764af322b","Type":"ContainerStarted","Data":"3b7fe3c15ec3b54517d7af6da5670e0eb5f5b6a2167f3f1878362c5e008d26c7"} Dec 09 11:43:52 crc kubenswrapper[4849]: I1209 11:43:52.311194 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" event={"ID":"d2444ef1-caaa-4c1f-b3ac-a503b340bb87","Type":"ContainerStarted","Data":"14a97be53bb4c1e79fb24a3d6d15d788574a69f6bde411c0b2e7151c8c0c55b2"} Dec 09 11:43:53 crc kubenswrapper[4849]: I1209 11:43:53.320892 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" event={"ID":"f891f270-493d-463a-9514-127200c5c495","Type":"ContainerStarted","Data":"b27abdb38fd6c6213f4fccd62f9df3c0254627d147cf729596ba74b05f2479e7"} Dec 09 11:43:53 crc kubenswrapper[4849]: I1209 11:43:53.323310 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" event={"ID":"473b8be0-bc7e-4c51-ab9a-73771a1664c2","Type":"ContainerStarted","Data":"67620feb4f7afbb4917f9b415de0eff73075696e9bf5623e3aeae01610490707"} Dec 09 11:43:55 crc kubenswrapper[4849]: I1209 11:43:55.337337 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" event={"ID":"f24fc0f5-c0b5-4155-874b-34f3cbb0ad25","Type":"ContainerStarted","Data":"8a844d45f6848e7d74408f65d80e6be5d58ce7781854b5ff72ee5cf2ce2eb2c9"} Dec 09 11:43:56 crc kubenswrapper[4849]: I1209 11:43:56.346280 4849 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" event={"ID":"6b911c78-1753-46a4-a042-c1395c2a73a9","Type":"ContainerStarted","Data":"9f9b0393d7287b0c5a2980f4ed66e9200ce1ae069208d412f1f11b380da9710e"} Dec 09 11:43:56 crc kubenswrapper[4849]: I1209 11:43:56.346854 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:43:56 crc kubenswrapper[4849]: I1209 11:43:56.390208 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" podStartSLOduration=45.390184721 podStartE2EDuration="45.390184721s" podCreationTimestamp="2025-12-09 11:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:43:56.384168332 +0000 UTC m=+1018.924052668" watchObservedRunningTime="2025-12-09 11:43:56.390184721 +0000 UTC m=+1018.930069037" Dec 09 11:43:57 crc kubenswrapper[4849]: E1209 11:43:57.943899 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" podUID="16904597-72e8-41f0-8810-cd75ff6af881" Dec 09 11:43:57 crc kubenswrapper[4849]: E1209 11:43:57.964012 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" podUID="bc26bf04-a33a-4314-a0fa-216360ac6d3b" Dec 09 11:43:57 crc kubenswrapper[4849]: E1209 11:43:57.979878 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" podUID="526627f5-817a-4f47-a28c-cc3597989b1d" Dec 09 11:43:58 crc kubenswrapper[4849]: E1209 11:43:58.045206 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" podUID="9143dc55-4bce-4cfe-a704-73cf4e65c91f" Dec 09 11:43:58 crc kubenswrapper[4849]: E1209 11:43:58.125757 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" podUID="93362b58-a33b-4683-ad57-6b72bb7d8655" Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.373484 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" event={"ID":"f51d531d-7b17-44e5-907d-9272df92466f","Type":"ContainerStarted","Data":"6fe899f186f1f14112bf9c997c499406a459f14d7e3b17d0c9fe4dedabb5ebed"} Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.382645 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" 
event={"ID":"f24fc0f5-c0b5-4155-874b-34f3cbb0ad25","Type":"ContainerStarted","Data":"d1c881724a6721222b6fa3d0b79e5e47b2d05466a0e5b3774bd7a0ba03df5739"} Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.382991 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.387016 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" event={"ID":"3cd2993d-bfa4-4aae-b11c-cdc46b9671da","Type":"ContainerStarted","Data":"d05777997198413e82435fca02a3bd75a3c6654cfa5e9e575c720c69d69fa608"} Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.389203 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" event={"ID":"93362b58-a33b-4683-ad57-6b72bb7d8655","Type":"ContainerStarted","Data":"26285a7284d339ea878da5558f47d74036b8d398d14aa68530140556e9f199b9"} Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.394733 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" event={"ID":"16904597-72e8-41f0-8810-cd75ff6af881","Type":"ContainerStarted","Data":"ff72b9d87a03e152664e0c124d841d40c435371491a9d2a5859541408ea1f2f5"} Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.398446 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" event={"ID":"9143dc55-4bce-4cfe-a704-73cf4e65c91f","Type":"ContainerStarted","Data":"70bb3ece853f66c7111efb4c74dfae3fdb328f3623d6f8375008ffe1a7be86c1"} Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.412591 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" event={"ID":"473b8be0-bc7e-4c51-ab9a-73771a1664c2","Type":"ContainerStarted","Data":"595c351b3e5281e9d36cf1d42ffce4119a92700f873f58c82969fd2103565e78"} Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.413489 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.426400 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" podStartSLOduration=3.17116863 podStartE2EDuration="48.426376086s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:12.373374556 +0000 UTC m=+974.913258872" lastFinishedPulling="2025-12-09 11:43:57.628582012 +0000 UTC m=+1020.168466328" observedRunningTime="2025-12-09 11:43:58.420175853 +0000 UTC m=+1020.960060179" watchObservedRunningTime="2025-12-09 11:43:58.426376086 +0000 UTC m=+1020.966260412" Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.429852 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.434446 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" event={"ID":"526627f5-817a-4f47-a28c-cc3597989b1d","Type":"ContainerStarted","Data":"a3b831e7d4d01d65582183bf069d58199d7ef32e6d84c995d6a437559e53b0ff"} Dec 09 11:43:58 
crc kubenswrapper[4849]: I1209 11:43:58.452635 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" event={"ID":"bc26bf04-a33a-4314-a0fa-216360ac6d3b","Type":"ContainerStarted","Data":"a22a2fb07b05b04527ed343fe599a93d0a2ab3f602ae44469d9dd1c781c8d311"} Dec 09 11:43:58 crc kubenswrapper[4849]: E1209 11:43:58.457251 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" podUID="bc26bf04-a33a-4314-a0fa-216360ac6d3b" Dec 09 11:43:58 crc kubenswrapper[4849]: I1209 11:43:58.517908 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-8tvx7" podStartSLOduration=3.724412191 podStartE2EDuration="48.517891667s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:12.915738036 +0000 UTC m=+975.455622352" lastFinishedPulling="2025-12-09 11:43:57.709217512 +0000 UTC m=+1020.249101828" observedRunningTime="2025-12-09 11:43:58.477685219 +0000 UTC m=+1021.017569535" watchObservedRunningTime="2025-12-09 11:43:58.517891667 +0000 UTC m=+1021.057775993" Dec 09 11:43:58 crc kubenswrapper[4849]: E1209 11:43:58.538807 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" podUID="3cd2993d-bfa4-4aae-b11c-cdc46b9671da" Dec 09 11:43:58 crc kubenswrapper[4849]: E1209 11:43:58.545676 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" podUID="f51d531d-7b17-44e5-907d-9272df92466f" Dec 09 11:43:58 crc kubenswrapper[4849]: E1209 11:43:58.602210 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" podUID="40bac272-7e22-45e7-841c-7cdd4f87f1ad" Dec 09 11:43:58 crc kubenswrapper[4849]: E1209 11:43:58.615947 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" podUID="42cdfefe-0e9c-4ff8-8447-5153ac692a2d" Dec 09 11:43:59 crc kubenswrapper[4849]: E1209 11:43:59.186086 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" podUID="9b389d0f-7f09-4744-b582-cf09ffe3c937" Dec 09 11:43:59 crc kubenswrapper[4849]: E1209 11:43:59.437348 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" podUID="f671b0c9-9b37-4150-a41c-7c95a969c149" Dec 09 11:43:59 crc kubenswrapper[4849]: I1209 11:43:59.484027 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" event={"ID":"40bac272-7e22-45e7-841c-7cdd4f87f1ad","Type":"ContainerStarted","Data":"d9e0e86730a7782db814ccf827baa66a9cb889af4376a040066a6d96d37f4bd9"} Dec 09 11:43:59 crc kubenswrapper[4849]: I1209 11:43:59.488817 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" event={"ID":"9b389d0f-7f09-4744-b582-cf09ffe3c937","Type":"ContainerStarted","Data":"b21e04b42e31c30083df01b596e9f71f96d2e38e77af8424eee9899e8f76f573"} Dec 09 11:43:59 crc kubenswrapper[4849]: E1209 11:43:59.494966 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" podUID="9b389d0f-7f09-4744-b582-cf09ffe3c937" Dec 09 11:43:59 crc kubenswrapper[4849]: I1209 11:43:59.497518 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" event={"ID":"42cdfefe-0e9c-4ff8-8447-5153ac692a2d","Type":"ContainerStarted","Data":"aec063d51828ab578328624716d64f28f64866720a98bae6cfba6e8597cd4fe7"} Dec 09 11:43:59 crc kubenswrapper[4849]: I1209 11:43:59.536859 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" event={"ID":"f671b0c9-9b37-4150-a41c-7c95a969c149","Type":"ContainerStarted","Data":"fb094c891544be8bdbc4ff2fe2092b29640eca4111803e86ff0b23e1fa12e48f"} Dec 09 11:43:59 crc kubenswrapper[4849]: I1209 11:43:59.539900 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nkjhr" Dec 09 11:43:59 crc kubenswrapper[4849]: E1209 11:43:59.558286 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" podUID="f51d531d-7b17-44e5-907d-9272df92466f" Dec 09 11:43:59 crc kubenswrapper[4849]: E1209 11:43:59.561526 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" podUID="3cd2993d-bfa4-4aae-b11c-cdc46b9671da" Dec 09 11:43:59 crc kubenswrapper[4849]: E1209 11:43:59.813936 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" podUID="48eb886e-615e-419e-af3a-28e348e24a13" Dec 09 11:44:00 crc kubenswrapper[4849]: E1209 11:44:00.191220 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" podUID="232105fe-9c4a-438e-bac7-0f13e78fb972" Dec 09 11:44:00 crc kubenswrapper[4849]: E1209 11:44:00.263885 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" podUID="577693e5-e4d7-4a4f-be14-41630da8744f" Dec 09 11:44:00 crc kubenswrapper[4849]: E1209 11:44:00.327660 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" podUID="53f856a1-0579-4b0a-8294-a2ffb94bf4e5" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.564831 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" event={"ID":"577693e5-e4d7-4a4f-be14-41630da8744f","Type":"ContainerStarted","Data":"964b128cfd6dc3c372a2e9d25c1cc64edf14343c461885d96ef27c71845cfadc"} Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.583726 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" event={"ID":"ab547409-b5b9-41ba-897d-01bd4d233906","Type":"ContainerStarted","Data":"e58d22aab2f53a881ab508491d0573879edf7480a35a8f9fd7e862906f4e8ef4"} Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.584957 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.601853 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" event={"ID":"53f856a1-0579-4b0a-8294-a2ffb94bf4e5","Type":"ContainerStarted","Data":"08f998054d8ab471142f57a354a1034bb86f0376b1f9b5b01a623e88f1673d37"} Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.614307 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" event={"ID":"526627f5-817a-4f47-a28c-cc3597989b1d","Type":"ContainerStarted","Data":"d89cecfce68e66df9cb81a1c7aaed35b1a22d63002ebad219f5168c2765b1c25"} Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.615395 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.637634 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" event={"ID":"93362b58-a33b-4683-ad57-6b72bb7d8655","Type":"ContainerStarted","Data":"ac74cc8f22f366afcf41aa603fc5a30e55c30f74c42aefbdd0b578d5ab56cc49"} Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.641514 4849 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.677843 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.683520 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" event={"ID":"16904597-72e8-41f0-8810-cd75ff6af881","Type":"ContainerStarted","Data":"105438a3ec74493ed07b06fd9a50c77684b0d81a0373da66f79f049c68d74982"} Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.683959 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.740677 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" event={"ID":"f891f270-493d-463a-9514-127200c5c495","Type":"ContainerStarted","Data":"fe324fbbfccabfd351108f12f65c6cd4f9596385bfc0d6243ebac01824f7bfc8"} Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.743114 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.751724 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.770981 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-xrq2w" podStartSLOduration=4.499869611 podStartE2EDuration="50.77095821s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.50788777 +0000 UTC m=+976.047772086" lastFinishedPulling="2025-12-09 11:43:59.778976369 +0000 UTC m=+1022.318860685" observedRunningTime="2025-12-09 11:44:00.720758545 +0000 UTC m=+1023.260642861" watchObservedRunningTime="2025-12-09 11:44:00.77095821 +0000 UTC m=+1023.310842526" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.788636 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" event={"ID":"232105fe-9c4a-438e-bac7-0f13e78fb972","Type":"ContainerStarted","Data":"041a77e4fe6a8f88417db7b47801adca5ebfc9dd89352b6a0dc5880fde20c4a5"} Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.822635 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" event={"ID":"48eb886e-615e-419e-af3a-28e348e24a13","Type":"ContainerStarted","Data":"5e97a2329ceae12cde1cc8c720616f77b3ef220ca7e5aced599d60e9bafefae8"} Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.868037 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" podStartSLOduration=4.166921464 podStartE2EDuration="50.868021457s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:12.794613862 +0000 UTC m=+975.334498178" lastFinishedPulling="2025-12-09 11:43:59.495713855 +0000 UTC m=+1022.035598171" observedRunningTime="2025-12-09 11:44:00.863067665 
+0000 UTC m=+1023.402951981" watchObservedRunningTime="2025-12-09 11:44:00.868021457 +0000 UTC m=+1023.407905773" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.952285 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-hzr9p" podStartSLOduration=4.610637206 podStartE2EDuration="50.952269926s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.217825997 +0000 UTC m=+975.757710313" lastFinishedPulling="2025-12-09 11:43:59.559458707 +0000 UTC m=+1022.099343033" observedRunningTime="2025-12-09 11:44:00.947734964 +0000 UTC m=+1023.487619280" watchObservedRunningTime="2025-12-09 11:44:00.952269926 +0000 UTC m=+1023.492154242" Dec 09 11:44:00 crc kubenswrapper[4849]: I1209 11:44:00.997933 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" podStartSLOduration=3.858311041 podStartE2EDuration="50.997914638s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:12.359396109 +0000 UTC m=+974.899280425" lastFinishedPulling="2025-12-09 11:43:59.498999706 +0000 UTC m=+1022.038884022" observedRunningTime="2025-12-09 11:44:00.992903894 +0000 UTC m=+1023.532788220" watchObservedRunningTime="2025-12-09 11:44:00.997914638 +0000 UTC m=+1023.537798954" Dec 09 11:44:01 crc kubenswrapper[4849]: I1209 11:44:01.103232 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" podStartSLOduration=4.359467489 podStartE2EDuration="51.10321544s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:12.75341992 +0000 UTC m=+975.293304236" lastFinishedPulling="2025-12-09 11:43:59.497167871 +0000 UTC m=+1022.037052187" observedRunningTime="2025-12-09 11:44:01.096883553 +0000 UTC m=+1023.636767869" watchObservedRunningTime="2025-12-09 11:44:01.10321544 +0000 UTC m=+1023.643099756" Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.858448 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" event={"ID":"9143dc55-4bce-4cfe-a704-73cf4e65c91f","Type":"ContainerStarted","Data":"10fdabbc0ac2241b97e4e789ba4e8de1f9bcdd7f3fa84d748e73ad0089c4c53d"} Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.858857 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.860979 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" event={"ID":"42cdfefe-0e9c-4ff8-8447-5153ac692a2d","Type":"ContainerStarted","Data":"005020049ad9d73ea83454eba630bb18265e9aa003b4932f2c4d748e6cd91ebf"} Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.861849 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.864218 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" event={"ID":"f671b0c9-9b37-4150-a41c-7c95a969c149","Type":"ContainerStarted","Data":"1fdb84d418e3fbdda43d5233ac1b23c4a9d10cbbc5c73d3953b196f03e27cca7"} Dec 09 
Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.864364 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8"
Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.881441 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" podStartSLOduration=4.920086071 podStartE2EDuration="52.881393267s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:12.300828946 +0000 UTC m=+974.840713262" lastFinishedPulling="2025-12-09 11:44:00.262136142 +0000 UTC m=+1022.802020458" observedRunningTime="2025-12-09 11:44:02.877796918 +0000 UTC m=+1025.417681244" watchObservedRunningTime="2025-12-09 11:44:02.881393267 +0000 UTC m=+1025.421277593"
Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.883237 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" event={"ID":"40bac272-7e22-45e7-841c-7cdd4f87f1ad","Type":"ContainerStarted","Data":"a8d2cc5e9e282c5bf40020cac724eae335dd3cb57678169defaa1e631a3677ce"}
Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.967467 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" podStartSLOduration=5.606112255 podStartE2EDuration="52.967450641s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.124053002 +0000 UTC m=+975.663937318" lastFinishedPulling="2025-12-09 11:44:00.485391388 +0000 UTC m=+1023.025275704" observedRunningTime="2025-12-09 11:44:02.961449272 +0000 UTC m=+1025.501333588" watchObservedRunningTime="2025-12-09 11:44:02.967450641 +0000 UTC m=+1025.507334967"
Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.967894 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" podStartSLOduration=5.7304654379999995 podStartE2EDuration="52.967889442s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:12.844675463 +0000 UTC m=+975.384559779" lastFinishedPulling="2025-12-09 11:44:00.082099467 +0000 UTC m=+1022.621983783" observedRunningTime="2025-12-09 11:44:02.916724673 +0000 UTC m=+1025.456608989" watchObservedRunningTime="2025-12-09 11:44:02.967889442 +0000 UTC m=+1025.507773758"
Dec 09 11:44:02 crc kubenswrapper[4849]: I1209 11:44:02.996699 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" podStartSLOduration=5.740776175 podStartE2EDuration="52.996676546s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.228648896 +0000 UTC m=+975.768533212" lastFinishedPulling="2025-12-09 11:44:00.484549267 +0000 UTC m=+1023.024433583" observedRunningTime="2025-12-09 11:44:02.991888397 +0000 UTC m=+1025.531772713" watchObservedRunningTime="2025-12-09 11:44:02.996676546 +0000 UTC m=+1025.536560872"
Dec 09 11:44:03 crc kubenswrapper[4849]: E1209 11:44:03.539898 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\""
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" podUID="3e922935-a9e7-49ab-bd10-f575e0ab0445" Dec 09 11:44:03 crc kubenswrapper[4849]: I1209 11:44:03.737689 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cfb8477d8-j2tf9" Dec 09 11:44:03 crc kubenswrapper[4849]: I1209 11:44:03.891943 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.945929 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" event={"ID":"232105fe-9c4a-438e-bac7-0f13e78fb972","Type":"ContainerStarted","Data":"09982112b6b8f75a27f9d2ac033214c2d0bd272ef94f12fd6d039ac36119645d"} Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.946642 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.949282 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" event={"ID":"48eb886e-615e-419e-af3a-28e348e24a13","Type":"ContainerStarted","Data":"571f4259dcb10a8761c68c35ca2d944673c066fbe2e7064fe4397d49d92dec4d"} Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.949536 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.951771 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" event={"ID":"577693e5-e4d7-4a4f-be14-41630da8744f","Type":"ContainerStarted","Data":"397cc23345232b28ec2d4b9575246280f0d2dff3b41281fcf9e32fd53567d8d3"} Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.952588 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.958606 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" event={"ID":"ed081101-9961-4cf9-9725-0ec764af322b","Type":"ContainerStarted","Data":"3cc8abafe0f271100efc7e500f434d44be2da1e5dfc718923e241b20642010ac"} Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.958645 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" event={"ID":"ed081101-9961-4cf9-9725-0ec764af322b","Type":"ContainerStarted","Data":"62b2ad8359cfe53121808db85d58d8d631a2bf0397b7194a11887169989242b7"} Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.958759 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.962856 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" event={"ID":"d2444ef1-caaa-4c1f-b3ac-a503b340bb87","Type":"ContainerStarted","Data":"ae7fe4ef25636327b14935f8ea8ec2992b470b3cbec24ca1f3556c1a1c106b30"} Dec 09 11:44:05 crc 
kubenswrapper[4849]: I1209 11:44:05.962889 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" event={"ID":"d2444ef1-caaa-4c1f-b3ac-a503b340bb87","Type":"ContainerStarted","Data":"c897484f0a93de7af46e30e63b543a1f6055981e7233fec93d0a8cc0894a3cc3"} Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.963015 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.965799 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" event={"ID":"53f856a1-0579-4b0a-8294-a2ffb94bf4e5","Type":"ContainerStarted","Data":"2581b194313c7bcda12959ee5997dce270f31db6b739a874a636e2f0ebf82c24"} Dec 09 11:44:05 crc kubenswrapper[4849]: I1209 11:44:05.966442 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" Dec 09 11:44:06 crc kubenswrapper[4849]: I1209 11:44:06.020631 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" podStartSLOduration=4.423709331 podStartE2EDuration="56.020609536s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.347624356 +0000 UTC m=+975.887508672" lastFinishedPulling="2025-12-09 11:44:04.944524561 +0000 UTC m=+1027.484408877" observedRunningTime="2025-12-09 11:44:06.0163102 +0000 UTC m=+1028.556194536" watchObservedRunningTime="2025-12-09 11:44:06.020609536 +0000 UTC m=+1028.560493862" Dec 09 11:44:06 crc kubenswrapper[4849]: I1209 11:44:06.051060 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" podStartSLOduration=4.3853527 podStartE2EDuration="56.051040391s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.212751901 +0000 UTC m=+975.752636217" lastFinishedPulling="2025-12-09 11:44:04.878439592 +0000 UTC m=+1027.418323908" observedRunningTime="2025-12-09 11:44:06.044686343 +0000 UTC m=+1028.584570659" watchObservedRunningTime="2025-12-09 11:44:06.051040391 +0000 UTC m=+1028.590924707" Dec 09 11:44:06 crc kubenswrapper[4849]: I1209 11:44:06.061992 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" podStartSLOduration=3.94980011 podStartE2EDuration="56.061978153s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:12.833773243 +0000 UTC m=+975.373657559" lastFinishedPulling="2025-12-09 11:44:04.945951286 +0000 UTC m=+1027.485835602" observedRunningTime="2025-12-09 11:44:06.059502161 +0000 UTC m=+1028.599386497" watchObservedRunningTime="2025-12-09 11:44:06.061978153 +0000 UTC m=+1028.601862469" Dec 09 11:44:06 crc kubenswrapper[4849]: I1209 11:44:06.087938 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" podStartSLOduration=4.29101218 podStartE2EDuration="56.087918985s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.168981036 +0000 UTC m=+975.708865352" lastFinishedPulling="2025-12-09 11:44:04.965887831 +0000 UTC 
m=+1027.505772157" observedRunningTime="2025-12-09 11:44:06.087829133 +0000 UTC m=+1028.627713459" watchObservedRunningTime="2025-12-09 11:44:06.087918985 +0000 UTC m=+1028.627803301" Dec 09 11:44:06 crc kubenswrapper[4849]: I1209 11:44:06.176016 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" podStartSLOduration=42.701753281 podStartE2EDuration="56.17600082s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:51.46778908 +0000 UTC m=+1014.007673396" lastFinishedPulling="2025-12-09 11:44:04.942036629 +0000 UTC m=+1027.481920935" observedRunningTime="2025-12-09 11:44:06.173064397 +0000 UTC m=+1028.712948713" watchObservedRunningTime="2025-12-09 11:44:06.17600082 +0000 UTC m=+1028.715885136" Dec 09 11:44:06 crc kubenswrapper[4849]: I1209 11:44:06.176914 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" podStartSLOduration=42.700446219 podStartE2EDuration="56.176908173s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:51.468836686 +0000 UTC m=+1014.008721002" lastFinishedPulling="2025-12-09 11:44:04.94529864 +0000 UTC m=+1027.485182956" observedRunningTime="2025-12-09 11:44:06.122548285 +0000 UTC m=+1028.662432591" watchObservedRunningTime="2025-12-09 11:44:06.176908173 +0000 UTC m=+1028.716792489" Dec 09 11:44:10 crc kubenswrapper[4849]: I1209 11:44:10.464268 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ch4jh" Dec 09 11:44:10 crc kubenswrapper[4849]: I1209 11:44:10.549507 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hmntq" Dec 09 11:44:10 crc kubenswrapper[4849]: I1209 11:44:10.616607 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xnt5q" Dec 09 11:44:10 crc kubenswrapper[4849]: I1209 11:44:10.637169 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-s6jnd" Dec 09 11:44:10 crc kubenswrapper[4849]: I1209 11:44:10.826766 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-z694m" Dec 09 11:44:11 crc kubenswrapper[4849]: I1209 11:44:11.026014 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-hr8b8" Dec 09 11:44:11 crc kubenswrapper[4849]: I1209 11:44:11.141139 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-5w7tw" Dec 09 11:44:11 crc kubenswrapper[4849]: I1209 11:44:11.198380 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5wpm6" Dec 09 11:44:11 crc kubenswrapper[4849]: I1209 11:44:11.278764 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-4hsgp" Dec 09 11:44:11 crc kubenswrapper[4849]: I1209 11:44:11.531891 4849 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-nn66x" Dec 09 11:44:11 crc kubenswrapper[4849]: I1209 11:44:11.570080 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6xz62" Dec 09 11:44:12 crc kubenswrapper[4849]: I1209 11:44:12.472434 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-88dmp" Dec 09 11:44:13 crc kubenswrapper[4849]: I1209 11:44:13.037168 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" event={"ID":"f51d531d-7b17-44e5-907d-9272df92466f","Type":"ContainerStarted","Data":"ab39e1d4e4db77c7853f0b0d8270cd09ba49541106019282d096ef76a809ac71"} Dec 09 11:44:13 crc kubenswrapper[4849]: I1209 11:44:13.038763 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" Dec 09 11:44:13 crc kubenswrapper[4849]: I1209 11:44:13.059287 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" podStartSLOduration=2.990660693 podStartE2EDuration="1m3.059266958s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:12.759395778 +0000 UTC m=+975.299280094" lastFinishedPulling="2025-12-09 11:44:12.828002053 +0000 UTC m=+1035.367886359" observedRunningTime="2025-12-09 11:44:13.056788638 +0000 UTC m=+1035.596672954" watchObservedRunningTime="2025-12-09 11:44:13.059266958 +0000 UTC m=+1035.599151284" Dec 09 11:44:13 crc kubenswrapper[4849]: I1209 11:44:13.253823 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fm55d8" Dec 09 11:44:15 crc kubenswrapper[4849]: I1209 11:44:15.063485 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" event={"ID":"9b389d0f-7f09-4744-b582-cf09ffe3c937","Type":"ContainerStarted","Data":"a14743fcc96a6bf10a4317e5a226df26553d7d29dc5ed28a361040860e7568b1"} Dec 09 11:44:15 crc kubenswrapper[4849]: I1209 11:44:15.064030 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" Dec 09 11:44:15 crc kubenswrapper[4849]: I1209 11:44:15.065551 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" event={"ID":"3cd2993d-bfa4-4aae-b11c-cdc46b9671da","Type":"ContainerStarted","Data":"dae79df449f08f532d1fe467e7ab5c233c3185af950d677a16952e357e090b01"} Dec 09 11:44:15 crc kubenswrapper[4849]: I1209 11:44:15.066140 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" Dec 09 11:44:15 crc kubenswrapper[4849]: I1209 11:44:15.070692 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" event={"ID":"bc26bf04-a33a-4314-a0fa-216360ac6d3b","Type":"ContainerStarted","Data":"2e53b6346c8aec6ca9db06ed14bfb6f23ce3e3c941cb75c5ac70b9783e300eab"} Dec 09 11:44:15 crc kubenswrapper[4849]: I1209 11:44:15.071256 4849 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" Dec 09 11:44:15 crc kubenswrapper[4849]: I1209 11:44:15.098676 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" podStartSLOduration=4.280644163 podStartE2EDuration="1m5.098659624s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.654133817 +0000 UTC m=+976.194018133" lastFinishedPulling="2025-12-09 11:44:14.472149278 +0000 UTC m=+1037.012033594" observedRunningTime="2025-12-09 11:44:15.095742362 +0000 UTC m=+1037.635626678" watchObservedRunningTime="2025-12-09 11:44:15.098659624 +0000 UTC m=+1037.638543940" Dec 09 11:44:15 crc kubenswrapper[4849]: I1209 11:44:15.100240 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" podStartSLOduration=4.095151184 podStartE2EDuration="1m5.100230183s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.384977852 +0000 UTC m=+975.924862168" lastFinishedPulling="2025-12-09 11:44:14.390056841 +0000 UTC m=+1036.929941167" observedRunningTime="2025-12-09 11:44:15.080709429 +0000 UTC m=+1037.620593745" watchObservedRunningTime="2025-12-09 11:44:15.100230183 +0000 UTC m=+1037.640114499" Dec 09 11:44:16 crc kubenswrapper[4849]: I1209 11:44:16.078369 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" event={"ID":"3e922935-a9e7-49ab-bd10-f575e0ab0445","Type":"ContainerStarted","Data":"91c1b585138b2674352389ea77d1263e3bad79de1e23211a9c81fffaf010f43d"} Dec 09 11:44:16 crc kubenswrapper[4849]: I1209 11:44:16.094548 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rx9bn" podStartSLOduration=3.174384248 podStartE2EDuration="1m5.09452813s" podCreationTimestamp="2025-12-09 11:43:11 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.674041541 +0000 UTC m=+976.213925857" lastFinishedPulling="2025-12-09 11:44:15.594185403 +0000 UTC m=+1038.134069739" observedRunningTime="2025-12-09 11:44:16.090690835 +0000 UTC m=+1038.630575151" watchObservedRunningTime="2025-12-09 11:44:16.09452813 +0000 UTC m=+1038.634412446" Dec 09 11:44:16 crc kubenswrapper[4849]: I1209 11:44:16.097170 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" podStartSLOduration=5.376331436 podStartE2EDuration="1m6.097159136s" podCreationTimestamp="2025-12-09 11:43:10 +0000 UTC" firstStartedPulling="2025-12-09 11:43:13.636732806 +0000 UTC m=+976.176617112" lastFinishedPulling="2025-12-09 11:44:14.357560496 +0000 UTC m=+1036.897444812" observedRunningTime="2025-12-09 11:44:15.115719717 +0000 UTC m=+1037.655604033" watchObservedRunningTime="2025-12-09 11:44:16.097159136 +0000 UTC m=+1038.637043462" Dec 09 11:44:20 crc kubenswrapper[4849]: I1209 11:44:20.974866 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-ns9dz" Dec 09 11:44:21 crc kubenswrapper[4849]: I1209 11:44:21.659998 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-26rfq" Dec 09 11:44:21 crc 
kubenswrapper[4849]: I1209 11:44:21.745308 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gnw95" Dec 09 11:44:21 crc kubenswrapper[4849]: I1209 11:44:21.907334 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-k9vnf" Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.873768 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7q2m9"] Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.875810 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.884393 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.884586 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6wwhn" Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.885675 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.889208 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.903889 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7q2m9"] Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.969988 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9lcg8"] Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.971176 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:39 crc kubenswrapper[4849]: I1209 11:44:39.976826 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.005640 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9lcg8"] Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.025386 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bscxq\" (UniqueName: \"kubernetes.io/projected/0d5fa528-442d-4bd3-9f50-244203377ad8-kube-api-access-bscxq\") pod \"dnsmasq-dns-675f4bcbfc-7q2m9\" (UID: \"0d5fa528-442d-4bd3-9f50-244203377ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.025472 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d5fa528-442d-4bd3-9f50-244203377ad8-config\") pod \"dnsmasq-dns-675f4bcbfc-7q2m9\" (UID: \"0d5fa528-442d-4bd3-9f50-244203377ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.126337 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9lcg8\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.126386 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-config\") pod \"dnsmasq-dns-78dd6ddcc-9lcg8\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.126459 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bscxq\" (UniqueName: \"kubernetes.io/projected/0d5fa528-442d-4bd3-9f50-244203377ad8-kube-api-access-bscxq\") pod \"dnsmasq-dns-675f4bcbfc-7q2m9\" (UID: \"0d5fa528-442d-4bd3-9f50-244203377ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.126647 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdnv\" (UniqueName: \"kubernetes.io/projected/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-kube-api-access-qkdnv\") pod \"dnsmasq-dns-78dd6ddcc-9lcg8\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.126703 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d5fa528-442d-4bd3-9f50-244203377ad8-config\") pod \"dnsmasq-dns-675f4bcbfc-7q2m9\" (UID: \"0d5fa528-442d-4bd3-9f50-244203377ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.127553 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d5fa528-442d-4bd3-9f50-244203377ad8-config\") pod \"dnsmasq-dns-675f4bcbfc-7q2m9\" (UID: \"0d5fa528-442d-4bd3-9f50-244203377ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 
11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.156498 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bscxq\" (UniqueName: \"kubernetes.io/projected/0d5fa528-442d-4bd3-9f50-244203377ad8-kube-api-access-bscxq\") pod \"dnsmasq-dns-675f4bcbfc-7q2m9\" (UID: \"0d5fa528-442d-4bd3-9f50-244203377ad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.192287 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.227877 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkdnv\" (UniqueName: \"kubernetes.io/projected/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-kube-api-access-qkdnv\") pod \"dnsmasq-dns-78dd6ddcc-9lcg8\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.228239 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9lcg8\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.228269 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-config\") pod \"dnsmasq-dns-78dd6ddcc-9lcg8\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.229904 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9lcg8\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.230707 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-config\") pod \"dnsmasq-dns-78dd6ddcc-9lcg8\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.256190 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkdnv\" (UniqueName: \"kubernetes.io/projected/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-kube-api-access-qkdnv\") pod \"dnsmasq-dns-78dd6ddcc-9lcg8\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.293928 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.509558 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7q2m9"] Dec 09 11:44:40 crc kubenswrapper[4849]: I1209 11:44:40.568222 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9lcg8"] Dec 09 11:44:40 crc kubenswrapper[4849]: W1209 11:44:40.573254 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28db2c1e_d9fa_44a8_be16_425e0dd72ba1.slice/crio-856e0a9d275d9751852fc2129a2228d6c5cefcf80d5eae00d11d4e90f94c07e8 WatchSource:0}: Error finding container 856e0a9d275d9751852fc2129a2228d6c5cefcf80d5eae00d11d4e90f94c07e8: Status 404 returned error can't find the container with id 856e0a9d275d9751852fc2129a2228d6c5cefcf80d5eae00d11d4e90f94c07e8 Dec 09 11:44:41 crc kubenswrapper[4849]: I1209 11:44:41.289941 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" event={"ID":"0d5fa528-442d-4bd3-9f50-244203377ad8","Type":"ContainerStarted","Data":"fa57833e27aeb7b7874748ae6a42eb1d87bb777cdb18ea62d992e274ea48321b"} Dec 09 11:44:41 crc kubenswrapper[4849]: I1209 11:44:41.290768 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" event={"ID":"28db2c1e-d9fa-44a8-be16-425e0dd72ba1","Type":"ContainerStarted","Data":"856e0a9d275d9751852fc2129a2228d6c5cefcf80d5eae00d11d4e90f94c07e8"} Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.615696 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7q2m9"] Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.652483 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-94rpx"] Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.653655 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.674395 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-94rpx"] Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.680757 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-dns-svc\") pod \"dnsmasq-dns-666b6646f7-94rpx\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.680880 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-config\") pod \"dnsmasq-dns-666b6646f7-94rpx\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.680931 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtd7d\" (UniqueName: \"kubernetes.io/projected/ede5a785-c680-4bc2-9e42-a7e0edaf7028-kube-api-access-dtd7d\") pod \"dnsmasq-dns-666b6646f7-94rpx\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.783029 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-config\") pod \"dnsmasq-dns-666b6646f7-94rpx\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.783111 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtd7d\" (UniqueName: \"kubernetes.io/projected/ede5a785-c680-4bc2-9e42-a7e0edaf7028-kube-api-access-dtd7d\") pod \"dnsmasq-dns-666b6646f7-94rpx\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.783161 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-dns-svc\") pod \"dnsmasq-dns-666b6646f7-94rpx\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.784279 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-dns-svc\") pod \"dnsmasq-dns-666b6646f7-94rpx\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.784944 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-config\") pod \"dnsmasq-dns-666b6646f7-94rpx\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.858559 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtd7d\" (UniqueName: 
\"kubernetes.io/projected/ede5a785-c680-4bc2-9e42-a7e0edaf7028-kube-api-access-dtd7d\") pod \"dnsmasq-dns-666b6646f7-94rpx\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:42 crc kubenswrapper[4849]: I1209 11:44:42.986601 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.623632 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9lcg8"] Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.672133 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghwxh"] Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.673742 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.713283 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-config\") pod \"dnsmasq-dns-57d769cc4f-ghwxh\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.713660 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ghwxh\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.713875 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlr9l\" (UniqueName: \"kubernetes.io/projected/a764cc5b-6d18-4193-8f44-2b0224d368e7-kube-api-access-xlr9l\") pod \"dnsmasq-dns-57d769cc4f-ghwxh\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.724809 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-94rpx"] Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.757933 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghwxh"] Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.815795 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ghwxh\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.816191 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlr9l\" (UniqueName: \"kubernetes.io/projected/a764cc5b-6d18-4193-8f44-2b0224d368e7-kube-api-access-xlr9l\") pod \"dnsmasq-dns-57d769cc4f-ghwxh\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.816236 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-config\") pod \"dnsmasq-dns-57d769cc4f-ghwxh\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.817271 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-config\") pod \"dnsmasq-dns-57d769cc4f-ghwxh\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.817446 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ghwxh\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:43 crc kubenswrapper[4849]: I1209 11:44:43.865083 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlr9l\" (UniqueName: \"kubernetes.io/projected/a764cc5b-6d18-4193-8f44-2b0224d368e7-kube-api-access-xlr9l\") pod \"dnsmasq-dns-57d769cc4f-ghwxh\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.016457 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.332257 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-94rpx" event={"ID":"ede5a785-c680-4bc2-9e42-a7e0edaf7028","Type":"ContainerStarted","Data":"1b6dcfd91a644188cd5cedb51001070357c274525f3ea2c839a3b319f4ca534c"} Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.381664 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.395989 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.396093 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.402709 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.402999 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.403196 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5bghx" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.403372 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.403725 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.403932 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.408645 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.532402 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz9rl\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-kube-api-access-cz9rl\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.532489 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.532524 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-config-data\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.532579 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86df3233-1d99-4023-9ff7-55bab063bd7e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.532605 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.532736 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc 
kubenswrapper[4849]: I1209 11:44:44.533007 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.533054 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.533103 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.533204 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.533230 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86df3233-1d99-4023-9ff7-55bab063bd7e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.593006 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghwxh"] Dec 09 11:44:44 crc kubenswrapper[4849]: W1209 11:44:44.602126 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda764cc5b_6d18_4193_8f44_2b0224d368e7.slice/crio-457f8026e8965523a660e7b8e0304d6a3e0b74d82040919c977f4ff7f80529bf WatchSource:0}: Error finding container 457f8026e8965523a660e7b8e0304d6a3e0b74d82040919c977f4ff7f80529bf: Status 404 returned error can't find the container with id 457f8026e8965523a660e7b8e0304d6a3e0b74d82040919c977f4ff7f80529bf Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.634713 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.634769 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.634794 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.634851 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.634873 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86df3233-1d99-4023-9ff7-55bab063bd7e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.634900 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz9rl\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-kube-api-access-cz9rl\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.635556 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.635595 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-config-data\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.635697 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.635719 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.635776 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86df3233-1d99-4023-9ff7-55bab063bd7e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.635816 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 
11:44:44.637605 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-config-data\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.637795 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.637880 4849 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.638075 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.641246 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.651601 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86df3233-1d99-4023-9ff7-55bab063bd7e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.651982 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.654093 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86df3233-1d99-4023-9ff7-55bab063bd7e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.658817 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz9rl\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-kube-api-access-cz9rl\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.673078 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " 
pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.682040 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") " pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.742600 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.849557 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.862614 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.866763 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.867004 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.867158 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.867317 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.870059 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.870280 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4jdlp" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.872994 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.876053 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950227 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950288 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950336 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e5432d8-b092-46cd-8aab-cb194ebb23f7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950385 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e5432d8-b092-46cd-8aab-cb194ebb23f7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950459 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950481 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950500 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950520 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950547 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8h4n\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-kube-api-access-c8h4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950568 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:44 crc kubenswrapper[4849]: I1209 11:44:44.950610 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052216 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e5432d8-b092-46cd-8aab-cb194ebb23f7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052589 4849 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052618 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052642 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052663 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052691 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8h4n\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-kube-api-access-c8h4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052718 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052774 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052813 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052839 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.052871 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e5432d8-b092-46cd-8aab-cb194ebb23f7-pod-info\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.054382 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.054402 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.055482 4849 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.056116 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.056575 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.056748 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.060935 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.075503 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8h4n\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-kube-api-access-c8h4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.078511 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e5432d8-b092-46cd-8aab-cb194ebb23f7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 
11:44:45.079658 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.080163 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e5432d8-b092-46cd-8aab-cb194ebb23f7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.113978 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.237076 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.386913 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" event={"ID":"a764cc5b-6d18-4193-8f44-2b0224d368e7","Type":"ContainerStarted","Data":"457f8026e8965523a660e7b8e0304d6a3e0b74d82040919c977f4ff7f80529bf"} Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.460230 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 11:44:45 crc kubenswrapper[4849]: W1209 11:44:45.479271 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86df3233_1d99_4023_9ff7_55bab063bd7e.slice/crio-35bdc790c9160fc885e9322eaf35192968b1ddcb96c810b9d83065ad8d76474d WatchSource:0}: Error finding container 35bdc790c9160fc885e9322eaf35192968b1ddcb96c810b9d83065ad8d76474d: Status 404 returned error can't find the container with id 35bdc790c9160fc885e9322eaf35192968b1ddcb96c810b9d83065ad8d76474d Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.837915 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.936263 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.938097 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.943026 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.943205 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.943367 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-c26hd" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.943560 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.951436 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 09 11:44:45 crc kubenswrapper[4849]: I1209 11:44:45.965534 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.095560 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.095610 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-config-data-default\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.095658 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.095702 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.095732 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.095756 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.095806 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-kolla-config\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.095823 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77rvj\" (UniqueName: \"kubernetes.io/projected/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-kube-api-access-77rvj\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.197120 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.197181 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.197205 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.197226 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-kolla-config\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.197239 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77rvj\" (UniqueName: \"kubernetes.io/projected/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-kube-api-access-77rvj\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.197278 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.197296 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-config-data-default\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.197340 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.198636 4849 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.201188 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-config-data-default\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.211286 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.212653 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.212748 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.213000 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-kolla-config\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.235370 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.241294 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.268020 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77rvj\" (UniqueName: \"kubernetes.io/projected/f78d8a52-1a90-4413-acb9-3925dfa4f1f0-kube-api-access-77rvj\") pod \"openstack-galera-0\" (UID: \"f78d8a52-1a90-4413-acb9-3925dfa4f1f0\") " pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.270631 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.413730 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86df3233-1d99-4023-9ff7-55bab063bd7e","Type":"ContainerStarted","Data":"35bdc790c9160fc885e9322eaf35192968b1ddcb96c810b9d83065ad8d76474d"} Dec 09 11:44:46 crc kubenswrapper[4849]: I1209 11:44:46.415443 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e5432d8-b092-46cd-8aab-cb194ebb23f7","Type":"ContainerStarted","Data":"dd973ce95c9b556a1379c4bdc8887e0cea0e270733ba531cf5957d98663c5b56"} Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.020605 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.022378 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.025256 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7mptk" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.025457 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.025676 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.029319 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.059165 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.064448 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.120661 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/574c9a8a-6aaf-4344-b566-039bf65b788d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.120697 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574c9a8a-6aaf-4344-b566-039bf65b788d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.120745 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/574c9a8a-6aaf-4344-b566-039bf65b788d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.120764 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574c9a8a-6aaf-4344-b566-039bf65b788d-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.120818 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/574c9a8a-6aaf-4344-b566-039bf65b788d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.120841 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/574c9a8a-6aaf-4344-b566-039bf65b788d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.120916 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.120933 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkr97\" (UniqueName: \"kubernetes.io/projected/574c9a8a-6aaf-4344-b566-039bf65b788d-kube-api-access-fkr97\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.221928 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.221967 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkr97\" (UniqueName: \"kubernetes.io/projected/574c9a8a-6aaf-4344-b566-039bf65b788d-kube-api-access-fkr97\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.221997 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/574c9a8a-6aaf-4344-b566-039bf65b788d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.222013 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574c9a8a-6aaf-4344-b566-039bf65b788d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.222058 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/574c9a8a-6aaf-4344-b566-039bf65b788d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.222080 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574c9a8a-6aaf-4344-b566-039bf65b788d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.222137 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/574c9a8a-6aaf-4344-b566-039bf65b788d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.222161 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/574c9a8a-6aaf-4344-b566-039bf65b788d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.223712 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/574c9a8a-6aaf-4344-b566-039bf65b788d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.224821 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/574c9a8a-6aaf-4344-b566-039bf65b788d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.225567 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574c9a8a-6aaf-4344-b566-039bf65b788d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.226130 4849 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.228802 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/574c9a8a-6aaf-4344-b566-039bf65b788d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.258874 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574c9a8a-6aaf-4344-b566-039bf65b788d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc 
kubenswrapper[4849]: I1209 11:44:47.285219 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkr97\" (UniqueName: \"kubernetes.io/projected/574c9a8a-6aaf-4344-b566-039bf65b788d-kube-api-access-fkr97\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.285342 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.310309 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/574c9a8a-6aaf-4344-b566-039bf65b788d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"574c9a8a-6aaf-4344-b566-039bf65b788d\") " pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.351019 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.446517 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.447872 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.452844 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.452890 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-q27xb" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.453009 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.468457 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.515271 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f78d8a52-1a90-4413-acb9-3925dfa4f1f0","Type":"ContainerStarted","Data":"a66ad808d63c4353c6e9077595251b85f8c4d1bcb913b89b952199601cbdb5d2"} Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.539458 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.539513 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-config-data\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.539573 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-kolla-config\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.539641 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.539677 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbf2w\" (UniqueName: \"kubernetes.io/projected/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-kube-api-access-gbf2w\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.642992 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-config-data\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.643468 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.643532 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-kolla-config\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.643622 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.643656 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbf2w\" (UniqueName: \"kubernetes.io/projected/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-kube-api-access-gbf2w\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.645871 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-config-data\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.648010 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-kolla-config\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.714601 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.714693 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.715869 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbf2w\" (UniqueName: \"kubernetes.io/projected/d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce-kube-api-access-gbf2w\") pod \"memcached-0\" (UID: \"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce\") " pod="openstack/memcached-0" Dec 09 11:44:47 crc kubenswrapper[4849]: I1209 11:44:47.804935 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 11:44:48 crc kubenswrapper[4849]: I1209 11:44:48.130349 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 11:44:48 crc kubenswrapper[4849]: I1209 11:44:48.462684 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 11:44:48 crc kubenswrapper[4849]: W1209 11:44:48.506296 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9617b32_ad2b_4bd3_a0d1_5ca6af5569ce.slice/crio-428e9ffbef948eb6e4c32a8e8148dfafae9709757fcf509efdacd0efde575624 WatchSource:0}: Error finding container 428e9ffbef948eb6e4c32a8e8148dfafae9709757fcf509efdacd0efde575624: Status 404 returned error can't find the container with id 428e9ffbef948eb6e4c32a8e8148dfafae9709757fcf509efdacd0efde575624 Dec 09 11:44:48 crc kubenswrapper[4849]: I1209 11:44:48.588357 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"574c9a8a-6aaf-4344-b566-039bf65b788d","Type":"ContainerStarted","Data":"51796cb1564a7f657df586f67cd78e9466f383ef06e1ee14bdea3f8f3043be1b"} Dec 09 11:44:49 crc kubenswrapper[4849]: I1209 11:44:49.224395 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 11:44:49 crc kubenswrapper[4849]: I1209 11:44:49.225552 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 11:44:49 crc kubenswrapper[4849]: I1209 11:44:49.235911 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 11:44:49 crc kubenswrapper[4849]: I1209 11:44:49.236291 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-fmq8l" Dec 09 11:44:49 crc kubenswrapper[4849]: I1209 11:44:49.300523 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8s5s\" (UniqueName: \"kubernetes.io/projected/31f3ac0d-dbb7-4371-8718-ddfafd5481f7-kube-api-access-m8s5s\") pod \"kube-state-metrics-0\" (UID: \"31f3ac0d-dbb7-4371-8718-ddfafd5481f7\") " pod="openstack/kube-state-metrics-0" Dec 09 11:44:49 crc kubenswrapper[4849]: I1209 11:44:49.403273 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8s5s\" (UniqueName: \"kubernetes.io/projected/31f3ac0d-dbb7-4371-8718-ddfafd5481f7-kube-api-access-m8s5s\") pod \"kube-state-metrics-0\" (UID: \"31f3ac0d-dbb7-4371-8718-ddfafd5481f7\") " pod="openstack/kube-state-metrics-0" Dec 09 11:44:49 crc kubenswrapper[4849]: I1209 11:44:49.423333 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8s5s\" (UniqueName: \"kubernetes.io/projected/31f3ac0d-dbb7-4371-8718-ddfafd5481f7-kube-api-access-m8s5s\") pod \"kube-state-metrics-0\" (UID: \"31f3ac0d-dbb7-4371-8718-ddfafd5481f7\") " pod="openstack/kube-state-metrics-0" Dec 09 11:44:49 crc kubenswrapper[4849]: I1209 11:44:49.568481 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 11:44:49 crc kubenswrapper[4849]: I1209 11:44:49.603828 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce","Type":"ContainerStarted","Data":"428e9ffbef948eb6e4c32a8e8148dfafae9709757fcf509efdacd0efde575624"} Dec 09 11:44:50 crc kubenswrapper[4849]: I1209 11:44:50.558207 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 11:44:50 crc kubenswrapper[4849]: I1209 11:44:50.654901 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"31f3ac0d-dbb7-4371-8718-ddfafd5481f7","Type":"ContainerStarted","Data":"7afb2718c867c1fbea74164930b94b19ec4b5c3035be67f61cc41a32b2dc728e"} Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.132821 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.132884 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.866238 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-czrmh"] Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.868324 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-czrmh" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.877199 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.877432 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.877655 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-25w4t" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.927593 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-czrmh"] Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.939438 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-chw84"] Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.941337 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.962857 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69a39d69-d705-4246-bc77-cbdd3fadfefa-var-run-ovn\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.962903 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a39d69-d705-4246-bc77-cbdd3fadfefa-ovn-controller-tls-certs\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.962938 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69a39d69-d705-4246-bc77-cbdd3fadfefa-scripts\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.962957 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2tk\" (UniqueName: \"kubernetes.io/projected/69a39d69-d705-4246-bc77-cbdd3fadfefa-kube-api-access-2c2tk\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.962976 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-var-lib\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.963003 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69a39d69-d705-4246-bc77-cbdd3fadfefa-var-log-ovn\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.963020 4849 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69a39d69-d705-4246-bc77-cbdd3fadfefa-var-run\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.963057 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a39d69-d705-4246-bc77-cbdd3fadfefa-combined-ca-bundle\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.963074 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdp7s\" (UniqueName: \"kubernetes.io/projected/47f40834-5de4-472b-a069-579d98cff69e-kube-api-access-hdp7s\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.963099 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-var-run\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.963114 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-var-log\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.964878 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-etc-ovs\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.964912 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47f40834-5de4-472b-a069-579d98cff69e-scripts\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:51 crc kubenswrapper[4849]: I1209 11:44:51.971214 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-chw84"] Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072285 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c2tk\" (UniqueName: \"kubernetes.io/projected/69a39d69-d705-4246-bc77-cbdd3fadfefa-kube-api-access-2c2tk\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072340 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-var-lib\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" 
Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072380 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69a39d69-d705-4246-bc77-cbdd3fadfefa-var-log-ovn\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072401 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69a39d69-d705-4246-bc77-cbdd3fadfefa-var-run\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072461 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a39d69-d705-4246-bc77-cbdd3fadfefa-combined-ca-bundle\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072477 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdp7s\" (UniqueName: \"kubernetes.io/projected/47f40834-5de4-472b-a069-579d98cff69e-kube-api-access-hdp7s\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072504 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-var-run\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072518 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-var-log\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072553 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-etc-ovs\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072573 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47f40834-5de4-472b-a069-579d98cff69e-scripts\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072596 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69a39d69-d705-4246-bc77-cbdd3fadfefa-var-run-ovn\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072621 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/69a39d69-d705-4246-bc77-cbdd3fadfefa-ovn-controller-tls-certs\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072648 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69a39d69-d705-4246-bc77-cbdd3fadfefa-scripts\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.072975 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-var-lib\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.073120 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69a39d69-d705-4246-bc77-cbdd3fadfefa-var-run\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.073235 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69a39d69-d705-4246-bc77-cbdd3fadfefa-var-log-ovn\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.073580 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-var-run\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.073997 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-var-log\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.074201 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69a39d69-d705-4246-bc77-cbdd3fadfefa-var-run-ovn\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.074554 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/47f40834-5de4-472b-a069-579d98cff69e-etc-ovs\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.076064 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47f40834-5de4-472b-a069-579d98cff69e-scripts\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.080220 4849 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69a39d69-d705-4246-bc77-cbdd3fadfefa-scripts\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.089211 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a39d69-d705-4246-bc77-cbdd3fadfefa-ovn-controller-tls-certs\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.107074 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a39d69-d705-4246-bc77-cbdd3fadfefa-combined-ca-bundle\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.108282 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdp7s\" (UniqueName: \"kubernetes.io/projected/47f40834-5de4-472b-a069-579d98cff69e-kube-api-access-hdp7s\") pod \"ovn-controller-ovs-chw84\" (UID: \"47f40834-5de4-472b-a069-579d98cff69e\") " pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.114321 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c2tk\" (UniqueName: \"kubernetes.io/projected/69a39d69-d705-4246-bc77-cbdd3fadfefa-kube-api-access-2c2tk\") pod \"ovn-controller-czrmh\" (UID: \"69a39d69-d705-4246-bc77-cbdd3fadfefa\") " pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.231445 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-czrmh" Dec 09 11:44:52 crc kubenswrapper[4849]: I1209 11:44:52.291035 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:44:53 crc kubenswrapper[4849]: I1209 11:44:53.385805 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-czrmh"] Dec 09 11:44:53 crc kubenswrapper[4849]: W1209 11:44:53.460182 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69a39d69_d705_4246_bc77_cbdd3fadfefa.slice/crio-4366336545f9c1078cabbe5a75d75eb3c6ba509ee3d746b202b23bbde3cca49b WatchSource:0}: Error finding container 4366336545f9c1078cabbe5a75d75eb3c6ba509ee3d746b202b23bbde3cca49b: Status 404 returned error can't find the container with id 4366336545f9c1078cabbe5a75d75eb3c6ba509ee3d746b202b23bbde3cca49b Dec 09 11:44:53 crc kubenswrapper[4849]: I1209 11:44:53.722284 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-czrmh" event={"ID":"69a39d69-d705-4246-bc77-cbdd3fadfefa","Type":"ContainerStarted","Data":"4366336545f9c1078cabbe5a75d75eb3c6ba509ee3d746b202b23bbde3cca49b"} Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.059648 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.069675 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.069772 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.073139 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-g2tnk" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.073648 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.073823 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.073957 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.074037 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.215024 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbb8ec61-588d-43ff-8597-eddb7a747106-config\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.215093 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb8ec61-588d-43ff-8597-eddb7a747106-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.215127 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb8ec61-588d-43ff-8597-eddb7a747106-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.215157 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqv6x\" (UniqueName: \"kubernetes.io/projected/bbb8ec61-588d-43ff-8597-eddb7a747106-kube-api-access-vqv6x\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.215175 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbb8ec61-588d-43ff-8597-eddb7a747106-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.215377 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.215495 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb8ec61-588d-43ff-8597-eddb7a747106-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.215539 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bbb8ec61-588d-43ff-8597-eddb7a747106-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.317060 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.317165 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb8ec61-588d-43ff-8597-eddb7a747106-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.317190 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bbb8ec61-588d-43ff-8597-eddb7a747106-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.317571 4849 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.319634 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bbb8ec61-588d-43ff-8597-eddb7a747106-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.319809 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbb8ec61-588d-43ff-8597-eddb7a747106-config\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.319849 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb8ec61-588d-43ff-8597-eddb7a747106-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.320391 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb8ec61-588d-43ff-8597-eddb7a747106-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.320602 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqv6x\" 
(UniqueName: \"kubernetes.io/projected/bbb8ec61-588d-43ff-8597-eddb7a747106-kube-api-access-vqv6x\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.320634 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbb8ec61-588d-43ff-8597-eddb7a747106-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.321815 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbb8ec61-588d-43ff-8597-eddb7a747106-config\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.322032 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbb8ec61-588d-43ff-8597-eddb7a747106-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.325691 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb8ec61-588d-43ff-8597-eddb7a747106-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.347331 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb8ec61-588d-43ff-8597-eddb7a747106-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.347374 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-chw84"] Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.348109 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqv6x\" (UniqueName: \"kubernetes.io/projected/bbb8ec61-588d-43ff-8597-eddb7a747106-kube-api-access-vqv6x\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.359498 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb8ec61-588d-43ff-8597-eddb7a747106-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.366734 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bbb8ec61-588d-43ff-8597-eddb7a747106\") " pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:54 crc kubenswrapper[4849]: I1209 11:44:54.404229 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.654059 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-s5gch"] Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.655563 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.662750 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.663320 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-s5gch"] Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.855469 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.855567 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-ovs-rundir\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.855610 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzrs9\" (UniqueName: \"kubernetes.io/projected/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-kube-api-access-wzrs9\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.855722 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-config\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.855748 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-combined-ca-bundle\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.855811 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-ovn-rundir\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.957294 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-ovn-rundir\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " 
pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.957439 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.957474 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-ovs-rundir\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.957574 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzrs9\" (UniqueName: \"kubernetes.io/projected/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-kube-api-access-wzrs9\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.957641 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-config\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.957667 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-combined-ca-bundle\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.958988 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-ovn-rundir\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.960664 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-ovs-rundir\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.960831 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-config\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.965911 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-combined-ca-bundle\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 
11:44:55.974176 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.979083 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzrs9\" (UniqueName: \"kubernetes.io/projected/edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c-kube-api-access-wzrs9\") pod \"ovn-controller-metrics-s5gch\" (UID: \"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c\") " pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:55 crc kubenswrapper[4849]: I1209 11:44:55.984147 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-s5gch" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.755242 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-chw84" event={"ID":"47f40834-5de4-472b-a069-579d98cff69e","Type":"ContainerStarted","Data":"d3371d65a507c9afff73c0f18afbd9165155682ac9b8b6889d60175279430819"} Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.790601 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.792505 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.797964 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.798877 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.799039 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.799181 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6gqgf" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.810608 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.981539 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40314306-27de-4c9d-ab86-7499d56d57c6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.981638 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fpm2\" (UniqueName: \"kubernetes.io/projected/40314306-27de-4c9d-ab86-7499d56d57c6-kube-api-access-4fpm2\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.981676 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40314306-27de-4c9d-ab86-7499d56d57c6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " 
pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.981708 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/40314306-27de-4c9d-ab86-7499d56d57c6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.981738 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40314306-27de-4c9d-ab86-7499d56d57c6-config\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.981760 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.981775 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/40314306-27de-4c9d-ab86-7499d56d57c6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:56 crc kubenswrapper[4849]: I1209 11:44:56.981795 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40314306-27de-4c9d-ab86-7499d56d57c6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.083770 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40314306-27de-4c9d-ab86-7499d56d57c6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.083884 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40314306-27de-4c9d-ab86-7499d56d57c6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.084045 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fpm2\" (UniqueName: \"kubernetes.io/projected/40314306-27de-4c9d-ab86-7499d56d57c6-kube-api-access-4fpm2\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.084138 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40314306-27de-4c9d-ab86-7499d56d57c6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.084222 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/40314306-27de-4c9d-ab86-7499d56d57c6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.084305 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40314306-27de-4c9d-ab86-7499d56d57c6-config\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.084359 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.084405 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/40314306-27de-4c9d-ab86-7499d56d57c6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.085454 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/40314306-27de-4c9d-ab86-7499d56d57c6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.085604 4849 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.086017 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40314306-27de-4c9d-ab86-7499d56d57c6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.089815 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40314306-27de-4c9d-ab86-7499d56d57c6-config\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.091521 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40314306-27de-4c9d-ab86-7499d56d57c6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.093200 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/40314306-27de-4c9d-ab86-7499d56d57c6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.096187 4849 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40314306-27de-4c9d-ab86-7499d56d57c6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.107187 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fpm2\" (UniqueName: \"kubernetes.io/projected/40314306-27de-4c9d-ab86-7499d56d57c6-kube-api-access-4fpm2\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.110791 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"40314306-27de-4c9d-ab86-7499d56d57c6\") " pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:57 crc kubenswrapper[4849]: I1209 11:44:57.123972 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 11:44:58 crc kubenswrapper[4849]: I1209 11:44:58.700087 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-s5gch"] Dec 09 11:44:58 crc kubenswrapper[4849]: I1209 11:44:58.711529 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 11:44:58 crc kubenswrapper[4849]: I1209 11:44:58.839307 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.159472 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl"] Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.161173 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.163044 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.163315 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.172174 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl"] Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.338085 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbsf\" (UniqueName: \"kubernetes.io/projected/cb794626-528c-420e-bdf1-ae2ae55d217c-kube-api-access-4rbsf\") pod \"collect-profiles-29421345-24jcl\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.338170 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb794626-528c-420e-bdf1-ae2ae55d217c-secret-volume\") pod \"collect-profiles-29421345-24jcl\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.338216 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb794626-528c-420e-bdf1-ae2ae55d217c-config-volume\") pod \"collect-profiles-29421345-24jcl\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.439462 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rbsf\" (UniqueName: \"kubernetes.io/projected/cb794626-528c-420e-bdf1-ae2ae55d217c-kube-api-access-4rbsf\") pod \"collect-profiles-29421345-24jcl\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.439552 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb794626-528c-420e-bdf1-ae2ae55d217c-secret-volume\") pod \"collect-profiles-29421345-24jcl\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.439590 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb794626-528c-420e-bdf1-ae2ae55d217c-config-volume\") pod \"collect-profiles-29421345-24jcl\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.440622 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb794626-528c-420e-bdf1-ae2ae55d217c-config-volume\") pod 
\"collect-profiles-29421345-24jcl\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.452430 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb794626-528c-420e-bdf1-ae2ae55d217c-secret-volume\") pod \"collect-profiles-29421345-24jcl\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.455142 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rbsf\" (UniqueName: \"kubernetes.io/projected/cb794626-528c-420e-bdf1-ae2ae55d217c-kube-api-access-4rbsf\") pod \"collect-profiles-29421345-24jcl\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:00 crc kubenswrapper[4849]: I1209 11:45:00.511392 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:03 crc kubenswrapper[4849]: W1209 11:45:03.990883 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb8ec61_588d_43ff_8597_eddb7a747106.slice/crio-039005dc72c2afefdb823abc8146ed2a6841f4f6e4c1d33eaf7bcd074c0f1545 WatchSource:0}: Error finding container 039005dc72c2afefdb823abc8146ed2a6841f4f6e4c1d33eaf7bcd074c0f1545: Status 404 returned error can't find the container with id 039005dc72c2afefdb823abc8146ed2a6841f4f6e4c1d33eaf7bcd074c0f1545 Dec 09 11:45:03 crc kubenswrapper[4849]: W1209 11:45:03.993817 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedb06c44_5bf3_44c9_8db2_9e9f1b6bab2c.slice/crio-1240ec7d0abc8748b4c60e08c88b47b20542f0f8693914755bd2cf1ebd679ed7 WatchSource:0}: Error finding container 1240ec7d0abc8748b4c60e08c88b47b20542f0f8693914755bd2cf1ebd679ed7: Status 404 returned error can't find the container with id 1240ec7d0abc8748b4c60e08c88b47b20542f0f8693914755bd2cf1ebd679ed7 Dec 09 11:45:03 crc kubenswrapper[4849]: W1209 11:45:03.995866 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40314306_27de_4c9d_ab86_7499d56d57c6.slice/crio-39e41866204c415c252d5512e118e4abe05aef35b3a1ba478239f0014992d339 WatchSource:0}: Error finding container 39e41866204c415c252d5512e118e4abe05aef35b3a1ba478239f0014992d339: Status 404 returned error can't find the container with id 39e41866204c415c252d5512e118e4abe05aef35b3a1ba478239f0014992d339 Dec 09 11:45:04 crc kubenswrapper[4849]: I1209 11:45:04.828384 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bbb8ec61-588d-43ff-8597-eddb7a747106","Type":"ContainerStarted","Data":"039005dc72c2afefdb823abc8146ed2a6841f4f6e4c1d33eaf7bcd074c0f1545"} Dec 09 11:45:04 crc kubenswrapper[4849]: I1209 11:45:04.830003 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"40314306-27de-4c9d-ab86-7499d56d57c6","Type":"ContainerStarted","Data":"39e41866204c415c252d5512e118e4abe05aef35b3a1ba478239f0014992d339"} Dec 09 11:45:04 crc kubenswrapper[4849]: I1209 11:45:04.831384 4849 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-controller-metrics-s5gch" event={"ID":"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c","Type":"ContainerStarted","Data":"1240ec7d0abc8748b4c60e08c88b47b20542f0f8693914755bd2cf1ebd679ed7"} Dec 09 11:45:05 crc kubenswrapper[4849]: E1209 11:45:05.110304 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 09 11:45:05 crc kubenswrapper[4849]: E1209 11:45:05.110527 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8h4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(9e5432d8-b092-46cd-8aab-cb194ebb23f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:45:05 crc kubenswrapper[4849]: E1209 11:45:05.111692 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" Dec 09 11:45:05 crc kubenswrapper[4849]: E1209 11:45:05.840615 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" Dec 09 11:45:07 crc kubenswrapper[4849]: E1209 11:45:07.240703 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 09 11:45:07 crc kubenswrapper[4849]: E1209 11:45:07.241093 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cz9rl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(86df3233-1d99-4023-9ff7-55bab063bd7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:45:07 crc kubenswrapper[4849]: E1209 11:45:07.242445 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" Dec 09 11:45:07 crc kubenswrapper[4849]: E1209 11:45:07.856282 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" Dec 09 11:45:13 crc kubenswrapper[4849]: E1209 11:45:13.403100 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 09 11:45:13 crc kubenswrapper[4849]: E1209 11:45:13.403866 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77rvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(f78d8a52-1a90-4413-acb9-3925dfa4f1f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:45:13 crc kubenswrapper[4849]: E1209 11:45:13.405093 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="f78d8a52-1a90-4413-acb9-3925dfa4f1f0" Dec 09 11:45:13 crc kubenswrapper[4849]: E1209 11:45:13.490007 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 09 11:45:13 crc kubenswrapper[4849]: E1209 11:45:13.490192 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkr97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(574c9a8a-6aaf-4344-b566-039bf65b788d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:45:13 crc kubenswrapper[4849]: E1209 11:45:13.491383 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="574c9a8a-6aaf-4344-b566-039bf65b788d" Dec 09 11:45:13 crc kubenswrapper[4849]: E1209 11:45:13.926605 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="574c9a8a-6aaf-4344-b566-039bf65b788d" Dec 09 11:45:13 crc kubenswrapper[4849]: E1209 11:45:13.927616 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="f78d8a52-1a90-4413-acb9-3925dfa4f1f0" Dec 09 11:45:14 crc kubenswrapper[4849]: E1209 11:45:14.323669 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 09 11:45:14 crc kubenswrapper[4849]: 
E1209 11:45:14.323874 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n66ch547h97hd7h5c8h74h5fh644h87h5b4h86h687h57h598h5d7h64fh54bhf6h85h68fh666h6hb7hdch5ddh68fh574h58bh55dh686h547h98q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbf2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:45:14 crc kubenswrapper[4849]: E1209 11:45:14.325255 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce" Dec 09 11:45:14 crc kubenswrapper[4849]: E1209 11:45:14.933855 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.179381 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.179891 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bscxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-7q2m9_openstack(0d5fa528-442d-4bd3-9f50-244203377ad8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.181036 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" podUID="0d5fa528-442d-4bd3-9f50-244203377ad8" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.183696 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.183821 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtd7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-94rpx_openstack(ede5a785-c680-4bc2-9e42-a7e0edaf7028): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.184970 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-94rpx" podUID="ede5a785-c680-4bc2-9e42-a7e0edaf7028" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.191894 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.192082 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xlr9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-ghwxh_openstack(a764cc5b-6d18-4193-8f44-2b0224d368e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.193600 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" podUID="a764cc5b-6d18-4193-8f44-2b0224d368e7" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.942298 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-94rpx" podUID="ede5a785-c680-4bc2-9e42-a7e0edaf7028" Dec 09 11:45:15 crc kubenswrapper[4849]: E1209 11:45:15.942643 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" podUID="a764cc5b-6d18-4193-8f44-2b0224d368e7" Dec 09 11:45:16 crc kubenswrapper[4849]: E1209 11:45:16.743621 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 11:45:16 crc kubenswrapper[4849]: E1209 11:45:16.744209 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkdnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9lcg8_openstack(28db2c1e-d9fa-44a8-be16-425e0dd72ba1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 11:45:16 crc kubenswrapper[4849]: E1209 11:45:16.745495 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" podUID="28db2c1e-d9fa-44a8-be16-425e0dd72ba1" Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.132886 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.133241 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.406449 4849 util.go:48] "No ready sandbox for pod can be found. 
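[annotation] The repeated "PullImage from image service failed ... code = Canceled desc = copying config: context canceled" entries above show the CRI image pull being cancelled mid-copy while several dnsmasq pods raced to pull the same quay.io image, after which pod_workers records ErrImagePull and then ImagePullBackOff. A minimal sketch, assuming only the standard google.golang.org/grpc/status and codes packages, of classifying such an RPC error the way these lines report it; the error text is copied from the log, not from kubelet source.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Fabricate an error with the same gRPC code/description the kubelet
	// logged for the CRI PullImage call ("rpc error: code = Canceled ...").
	err := status.Error(codes.Canceled, "copying config: context canceled")

	// status.FromError recovers the code; codes.Canceled is what a cancelled
	// context surfaces as across a gRPC boundary such as the CRI socket.
	if st, ok := status.FromError(err); ok {
		switch st.Code() {
		case codes.Canceled:
			fmt.Println("pull cancelled (caller gave up or deadline hit):", st.Message())
		case codes.DeadlineExceeded:
			fmt.Println("pull timed out:", st.Message())
		default:
			fmt.Println("other pull failure:", st.Code(), st.Message())
		}
	}
}
```

The subsequent ImagePullBackOff entries for the same image are the kubelet's retry throttle; per the upstream Kubernetes documentation the delay grows on each failed attempt up to a 300-second cap.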
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.417770 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bscxq\" (UniqueName: \"kubernetes.io/projected/0d5fa528-442d-4bd3-9f50-244203377ad8-kube-api-access-bscxq\") pod \"0d5fa528-442d-4bd3-9f50-244203377ad8\" (UID: \"0d5fa528-442d-4bd3-9f50-244203377ad8\") " Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.417828 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d5fa528-442d-4bd3-9f50-244203377ad8-config\") pod \"0d5fa528-442d-4bd3-9f50-244203377ad8\" (UID: \"0d5fa528-442d-4bd3-9f50-244203377ad8\") " Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.418553 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d5fa528-442d-4bd3-9f50-244203377ad8-config" (OuterVolumeSpecName: "config") pod "0d5fa528-442d-4bd3-9f50-244203377ad8" (UID: "0d5fa528-442d-4bd3-9f50-244203377ad8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.425613 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d5fa528-442d-4bd3-9f50-244203377ad8-kube-api-access-bscxq" (OuterVolumeSpecName: "kube-api-access-bscxq") pod "0d5fa528-442d-4bd3-9f50-244203377ad8" (UID: "0d5fa528-442d-4bd3-9f50-244203377ad8"). InnerVolumeSpecName "kube-api-access-bscxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.519318 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bscxq\" (UniqueName: \"kubernetes.io/projected/0d5fa528-442d-4bd3-9f50-244203377ad8-kube-api-access-bscxq\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.519353 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d5fa528-442d-4bd3-9f50-244203377ad8-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.982597 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" event={"ID":"0d5fa528-442d-4bd3-9f50-244203377ad8","Type":"ContainerDied","Data":"fa57833e27aeb7b7874748ae6a42eb1d87bb777cdb18ea62d992e274ea48321b"} Dec 09 11:45:21 crc kubenswrapper[4849]: I1209 11:45:21.982632 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7q2m9" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.042921 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.084506 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7q2m9"] Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.089390 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7q2m9"] Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.232394 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-config\") pod \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.232496 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkdnv\" (UniqueName: \"kubernetes.io/projected/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-kube-api-access-qkdnv\") pod \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.232698 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-dns-svc\") pod \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\" (UID: \"28db2c1e-d9fa-44a8-be16-425e0dd72ba1\") " Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.232940 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-config" (OuterVolumeSpecName: "config") pod "28db2c1e-d9fa-44a8-be16-425e0dd72ba1" (UID: "28db2c1e-d9fa-44a8-be16-425e0dd72ba1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.233131 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.233355 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28db2c1e-d9fa-44a8-be16-425e0dd72ba1" (UID: "28db2c1e-d9fa-44a8-be16-425e0dd72ba1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.236168 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-kube-api-access-qkdnv" (OuterVolumeSpecName: "kube-api-access-qkdnv") pod "28db2c1e-d9fa-44a8-be16-425e0dd72ba1" (UID: "28db2c1e-d9fa-44a8-be16-425e0dd72ba1"). InnerVolumeSpecName "kube-api-access-qkdnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.334461 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkdnv\" (UniqueName: \"kubernetes.io/projected/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-kube-api-access-qkdnv\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.334509 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28db2c1e-d9fa-44a8-be16-425e0dd72ba1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.561436 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d5fa528-442d-4bd3-9f50-244203377ad8" path="/var/lib/kubelet/pods/0d5fa528-442d-4bd3-9f50-244203377ad8/volumes" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.869879 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl"] Dec 09 11:45:22 crc kubenswrapper[4849]: W1209 11:45:22.945594 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb794626_528c_420e_bdf1_ae2ae55d217c.slice/crio-49ee9e8c72c3e91cd619e68efd7179ebeefee070b9d258dda934c16255623f8b WatchSource:0}: Error finding container 49ee9e8c72c3e91cd619e68efd7179ebeefee070b9d258dda934c16255623f8b: Status 404 returned error can't find the container with id 49ee9e8c72c3e91cd619e68efd7179ebeefee070b9d258dda934c16255623f8b Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.990003 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" event={"ID":"28db2c1e-d9fa-44a8-be16-425e0dd72ba1","Type":"ContainerDied","Data":"856e0a9d275d9751852fc2129a2228d6c5cefcf80d5eae00d11d4e90f94c07e8"} Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.990034 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9lcg8" Dec 09 11:45:22 crc kubenswrapper[4849]: I1209 11:45:22.993049 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" event={"ID":"cb794626-528c-420e-bdf1-ae2ae55d217c","Type":"ContainerStarted","Data":"49ee9e8c72c3e91cd619e68efd7179ebeefee070b9d258dda934c16255623f8b"} Dec 09 11:45:23 crc kubenswrapper[4849]: I1209 11:45:23.043036 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9lcg8"] Dec 09 11:45:23 crc kubenswrapper[4849]: I1209 11:45:23.052617 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9lcg8"] Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.007154 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-s5gch" event={"ID":"edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c","Type":"ContainerStarted","Data":"0a4fc7e8eabf53ed22dd5b5cbda5c5850c23f58c2f958a210e1f9702f2d50207"} Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.012611 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"31f3ac0d-dbb7-4371-8718-ddfafd5481f7","Type":"ContainerStarted","Data":"194e5c617c80510f556f08b123a5da6fd8d5780b77d8e84e7287ea684ea14bbb"} Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.012753 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.019968 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-chw84" event={"ID":"47f40834-5de4-472b-a069-579d98cff69e","Type":"ContainerStarted","Data":"48e3d03cc8b76c62fc7befe52a31929329f91c93633dd5e53f371f82bd7deb30"} Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.023774 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bbb8ec61-588d-43ff-8597-eddb7a747106","Type":"ContainerStarted","Data":"e665b540b907cd055f13d514951e3c40f0359a45b9e561d76ad9d74af62ab26e"} Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.030947 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"40314306-27de-4c9d-ab86-7499d56d57c6","Type":"ContainerStarted","Data":"8fb536287bf76e808c222c823dba25a0c1e25b5f6a1dbd855d0cf2768ea66c43"} Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.032153 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-s5gch" podStartSLOduration=9.955916337 podStartE2EDuration="29.032127357s" podCreationTimestamp="2025-12-09 11:44:55 +0000 UTC" firstStartedPulling="2025-12-09 11:45:04.000033576 +0000 UTC m=+1086.539917932" lastFinishedPulling="2025-12-09 11:45:23.076244626 +0000 UTC m=+1105.616128952" observedRunningTime="2025-12-09 11:45:24.026554234 +0000 UTC m=+1106.566438550" watchObservedRunningTime="2025-12-09 11:45:24.032127357 +0000 UTC m=+1106.572011693" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.036461 4849 generic.go:334] "Generic (PLEG): container finished" podID="cb794626-528c-420e-bdf1-ae2ae55d217c" containerID="c54178bf995d8df83d603efcd6513046e6169e27fa247159c5aa93f8d7805bc8" exitCode=0 Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.036526 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" 
event={"ID":"cb794626-528c-420e-bdf1-ae2ae55d217c","Type":"ContainerDied","Data":"c54178bf995d8df83d603efcd6513046e6169e27fa247159c5aa93f8d7805bc8"} Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.040823 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-czrmh" event={"ID":"69a39d69-d705-4246-bc77-cbdd3fadfefa","Type":"ContainerStarted","Data":"a37cc7e4c9b068aab0a8451fa747ba6123b9b349cb8ffe939b60c2ac2482cc3d"} Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.041581 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-czrmh" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.049441 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.087542277 podStartE2EDuration="35.04942432s" podCreationTimestamp="2025-12-09 11:44:49 +0000 UTC" firstStartedPulling="2025-12-09 11:44:50.630090238 +0000 UTC m=+1073.169974554" lastFinishedPulling="2025-12-09 11:45:23.591972281 +0000 UTC m=+1106.131856597" observedRunningTime="2025-12-09 11:45:24.048042569 +0000 UTC m=+1106.587926895" watchObservedRunningTime="2025-12-09 11:45:24.04942432 +0000 UTC m=+1106.589308636" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.125352 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-czrmh" podStartSLOduration=3.594863612 podStartE2EDuration="33.125333795s" podCreationTimestamp="2025-12-09 11:44:51 +0000 UTC" firstStartedPulling="2025-12-09 11:44:53.470669364 +0000 UTC m=+1076.010553680" lastFinishedPulling="2025-12-09 11:45:23.001139537 +0000 UTC m=+1105.541023863" observedRunningTime="2025-12-09 11:45:24.119954167 +0000 UTC m=+1106.659838473" watchObservedRunningTime="2025-12-09 11:45:24.125333795 +0000 UTC m=+1106.665218111" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.299306 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-94rpx"] Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.356141 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x849b"] Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.357853 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.361902 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.368476 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x849b"] Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.479265 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.479325 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.479372 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7n4\" (UniqueName: \"kubernetes.io/projected/638f2f6c-bb21-46d2-9232-417f74400264-kube-api-access-4c7n4\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.479466 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-config\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.509831 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghwxh"] Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.569790 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28db2c1e-d9fa-44a8-be16-425e0dd72ba1" path="/var/lib/kubelet/pods/28db2c1e-d9fa-44a8-be16-425e0dd72ba1/volumes" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.580931 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.580988 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.581032 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7n4\" (UniqueName: \"kubernetes.io/projected/638f2f6c-bb21-46d2-9232-417f74400264-kube-api-access-4c7n4\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.581098 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-config\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.582814 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.583652 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.584114 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-config\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.627127 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vxhgv"] Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.628969 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.631296 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.644151 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vxhgv"] Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.713863 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7n4\" (UniqueName: \"kubernetes.io/projected/638f2f6c-bb21-46d2-9232-417f74400264-kube-api-access-4c7n4\") pod \"dnsmasq-dns-7fd796d7df-x849b\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.788293 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.788687 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.788719 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9t7\" (UniqueName: \"kubernetes.io/projected/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-kube-api-access-9z9t7\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.788747 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.788815 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-config\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.804597 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.818703 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.884927 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.889938 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-config\") pod \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.890047 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-dns-svc\") pod \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.890135 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtd7d\" (UniqueName: \"kubernetes.io/projected/ede5a785-c680-4bc2-9e42-a7e0edaf7028-kube-api-access-dtd7d\") pod \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\" (UID: \"ede5a785-c680-4bc2-9e42-a7e0edaf7028\") " Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.890378 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.890431 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.890457 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9t7\" (UniqueName: \"kubernetes.io/projected/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-kube-api-access-9z9t7\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.890478 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.890529 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-config\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.891304 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.891587 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-config" (OuterVolumeSpecName: "config") pod "ede5a785-c680-4bc2-9e42-a7e0edaf7028" (UID: "ede5a785-c680-4bc2-9e42-a7e0edaf7028"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.891803 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ede5a785-c680-4bc2-9e42-a7e0edaf7028" (UID: "ede5a785-c680-4bc2-9e42-a7e0edaf7028"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.892350 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-config\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.892965 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.893883 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.900371 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede5a785-c680-4bc2-9e42-a7e0edaf7028-kube-api-access-dtd7d" (OuterVolumeSpecName: "kube-api-access-dtd7d") pod "ede5a785-c680-4bc2-9e42-a7e0edaf7028" (UID: "ede5a785-c680-4bc2-9e42-a7e0edaf7028"). InnerVolumeSpecName "kube-api-access-dtd7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.931866 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9t7\" (UniqueName: \"kubernetes.io/projected/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-kube-api-access-9z9t7\") pod \"dnsmasq-dns-86db49b7ff-vxhgv\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.967297 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.993318 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-config\") pod \"a764cc5b-6d18-4193-8f44-2b0224d368e7\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.993386 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlr9l\" (UniqueName: \"kubernetes.io/projected/a764cc5b-6d18-4193-8f44-2b0224d368e7-kube-api-access-xlr9l\") pod \"a764cc5b-6d18-4193-8f44-2b0224d368e7\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.993433 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-dns-svc\") pod \"a764cc5b-6d18-4193-8f44-2b0224d368e7\" (UID: \"a764cc5b-6d18-4193-8f44-2b0224d368e7\") " Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.993742 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtd7d\" (UniqueName: \"kubernetes.io/projected/ede5a785-c680-4bc2-9e42-a7e0edaf7028-kube-api-access-dtd7d\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.993757 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.993767 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede5a785-c680-4bc2-9e42-a7e0edaf7028-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.994010 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-config" (OuterVolumeSpecName: "config") pod "a764cc5b-6d18-4193-8f44-2b0224d368e7" (UID: "a764cc5b-6d18-4193-8f44-2b0224d368e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:24 crc kubenswrapper[4849]: I1209 11:45:24.994291 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a764cc5b-6d18-4193-8f44-2b0224d368e7" (UID: "a764cc5b-6d18-4193-8f44-2b0224d368e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.000905 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a764cc5b-6d18-4193-8f44-2b0224d368e7-kube-api-access-xlr9l" (OuterVolumeSpecName: "kube-api-access-xlr9l") pod "a764cc5b-6d18-4193-8f44-2b0224d368e7" (UID: "a764cc5b-6d18-4193-8f44-2b0224d368e7"). InnerVolumeSpecName "kube-api-access-xlr9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.069229 4849 generic.go:334] "Generic (PLEG): container finished" podID="47f40834-5de4-472b-a069-579d98cff69e" containerID="48e3d03cc8b76c62fc7befe52a31929329f91c93633dd5e53f371f82bd7deb30" exitCode=0 Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.069304 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-chw84" event={"ID":"47f40834-5de4-472b-a069-579d98cff69e","Type":"ContainerDied","Data":"48e3d03cc8b76c62fc7befe52a31929329f91c93633dd5e53f371f82bd7deb30"} Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.076215 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bbb8ec61-588d-43ff-8597-eddb7a747106","Type":"ContainerStarted","Data":"f6ea4eb4b9e2f0787fe1a35452807c6fa573be01040eef72b3d183cb4a40f96c"} Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.099885 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"40314306-27de-4c9d-ab86-7499d56d57c6","Type":"ContainerStarted","Data":"d575c464534203d8f482c589b037ba033daea87442a7c1d9dd39b1c9b182c146"} Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.102486 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-94rpx" event={"ID":"ede5a785-c680-4bc2-9e42-a7e0edaf7028","Type":"ContainerDied","Data":"1b6dcfd91a644188cd5cedb51001070357c274525f3ea2c839a3b319f4ca534c"} Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.102573 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-94rpx" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.105082 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.105104 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlr9l\" (UniqueName: \"kubernetes.io/projected/a764cc5b-6d18-4193-8f44-2b0224d368e7-kube-api-access-xlr9l\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.105117 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a764cc5b-6d18-4193-8f44-2b0224d368e7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.112478 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e5432d8-b092-46cd-8aab-cb194ebb23f7","Type":"ContainerStarted","Data":"f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663"} Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.142377 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" event={"ID":"a764cc5b-6d18-4193-8f44-2b0224d368e7","Type":"ContainerDied","Data":"457f8026e8965523a660e7b8e0304d6a3e0b74d82040919c977f4ff7f80529bf"} Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.142435 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ghwxh" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.150091 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.143069133 podStartE2EDuration="32.150069228s" podCreationTimestamp="2025-12-09 11:44:53 +0000 UTC" firstStartedPulling="2025-12-09 11:45:03.994116622 +0000 UTC m=+1086.534000938" lastFinishedPulling="2025-12-09 11:45:23.001116707 +0000 UTC m=+1105.541001033" observedRunningTime="2025-12-09 11:45:25.129714528 +0000 UTC m=+1107.669598844" watchObservedRunningTime="2025-12-09 11:45:25.150069228 +0000 UTC m=+1107.689953544" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.164282 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86df3233-1d99-4023-9ff7-55bab063bd7e","Type":"ContainerStarted","Data":"115a468a692fb19ce7f718c4cc8fdb83f4844c63e4e4a649b01cb6f4bca1c956"} Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.231043 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.402723426 podStartE2EDuration="30.231017374s" podCreationTimestamp="2025-12-09 11:44:55 +0000 UTC" firstStartedPulling="2025-12-09 11:45:03.998614422 +0000 UTC m=+1086.538498748" lastFinishedPulling="2025-12-09 11:45:22.82690838 +0000 UTC m=+1105.366792696" observedRunningTime="2025-12-09 11:45:25.206733778 +0000 UTC m=+1107.746618094" watchObservedRunningTime="2025-12-09 11:45:25.231017374 +0000 UTC m=+1107.770901700" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.325947 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-94rpx"] Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.340262 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-94rpx"] Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.404235 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghwxh"] Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.436864 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghwxh"] Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.466487 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x849b"] Dec 09 11:45:25 crc kubenswrapper[4849]: E1209 11:45:25.483812 4849 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda764cc5b_6d18_4193_8f44_2b0224d368e7.slice/crio-457f8026e8965523a660e7b8e0304d6a3e0b74d82040919c977f4ff7f80529bf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podede5a785_c680_4bc2_9e42_a7e0edaf7028.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podede5a785_c680_4bc2_9e42_a7e0edaf7028.slice/crio-1b6dcfd91a644188cd5cedb51001070357c274525f3ea2c839a3b319f4ca534c\": RecentStats: unable to find data in memory cache]" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.532392 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vxhgv"] Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.687158 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.826518 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb794626-528c-420e-bdf1-ae2ae55d217c-secret-volume\") pod \"cb794626-528c-420e-bdf1-ae2ae55d217c\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.826616 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb794626-528c-420e-bdf1-ae2ae55d217c-config-volume\") pod \"cb794626-528c-420e-bdf1-ae2ae55d217c\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.826672 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rbsf\" (UniqueName: \"kubernetes.io/projected/cb794626-528c-420e-bdf1-ae2ae55d217c-kube-api-access-4rbsf\") pod \"cb794626-528c-420e-bdf1-ae2ae55d217c\" (UID: \"cb794626-528c-420e-bdf1-ae2ae55d217c\") " Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.827359 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb794626-528c-420e-bdf1-ae2ae55d217c-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb794626-528c-420e-bdf1-ae2ae55d217c" (UID: "cb794626-528c-420e-bdf1-ae2ae55d217c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.832642 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb794626-528c-420e-bdf1-ae2ae55d217c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb794626-528c-420e-bdf1-ae2ae55d217c" (UID: "cb794626-528c-420e-bdf1-ae2ae55d217c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.832946 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb794626-528c-420e-bdf1-ae2ae55d217c-kube-api-access-4rbsf" (OuterVolumeSpecName: "kube-api-access-4rbsf") pod "cb794626-528c-420e-bdf1-ae2ae55d217c" (UID: "cb794626-528c-420e-bdf1-ae2ae55d217c"). InnerVolumeSpecName "kube-api-access-4rbsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.929085 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rbsf\" (UniqueName: \"kubernetes.io/projected/cb794626-528c-420e-bdf1-ae2ae55d217c-kube-api-access-4rbsf\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.929158 4849 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb794626-528c-420e-bdf1-ae2ae55d217c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:25 crc kubenswrapper[4849]: I1209 11:45:25.929175 4849 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb794626-528c-420e-bdf1-ae2ae55d217c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.175030 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f78d8a52-1a90-4413-acb9-3925dfa4f1f0","Type":"ContainerStarted","Data":"b9115513ead9b48aa65d202722e92292c4f23dc1a4cfb1b5ddd8d654c2fc51be"} Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.176606 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" event={"ID":"638f2f6c-bb21-46d2-9232-417f74400264","Type":"ContainerStarted","Data":"e432b9f14b2f8a6ec0e4020d23749b72ae3ae1027a299f078f8fcb94ca48b16b"} Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.178596 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" event={"ID":"cb794626-528c-420e-bdf1-ae2ae55d217c","Type":"ContainerDied","Data":"49ee9e8c72c3e91cd619e68efd7179ebeefee070b9d258dda934c16255623f8b"} Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.178636 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49ee9e8c72c3e91cd619e68efd7179ebeefee070b9d258dda934c16255623f8b" Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.178695 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421345-24jcl" Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.183329 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" event={"ID":"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9","Type":"ContainerStarted","Data":"861b56c1c77b2af2ec1e513843b2aa00e28fb26ee3a4e65ed88c7ce799fadff8"} Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.198018 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-chw84" event={"ID":"47f40834-5de4-472b-a069-579d98cff69e","Type":"ContainerStarted","Data":"d7685ee14cca7078e3fd1954f7b77fd3a4f0973350187a577710bc1f496f9ca0"} Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.198058 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-chw84" event={"ID":"47f40834-5de4-472b-a069-579d98cff69e","Type":"ContainerStarted","Data":"edabe9a67ce9bffb3c2c6ecb5da6a6b28f841f59c2b81bdaaa515af061c5e4d8"} Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.198079 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.199274 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.236702 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-chw84" podStartSLOduration=9.28605994 podStartE2EDuration="35.236679696s" podCreationTimestamp="2025-12-09 11:44:51 +0000 UTC" firstStartedPulling="2025-12-09 11:44:56.525606173 +0000 UTC m=+1079.065490499" lastFinishedPulling="2025-12-09 11:45:22.476225929 +0000 UTC m=+1105.016110255" observedRunningTime="2025-12-09 11:45:26.232026513 +0000 UTC m=+1108.771910829" watchObservedRunningTime="2025-12-09 11:45:26.236679696 +0000 UTC m=+1108.776564022" Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.552283 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a764cc5b-6d18-4193-8f44-2b0224d368e7" path="/var/lib/kubelet/pods/a764cc5b-6d18-4193-8f44-2b0224d368e7/volumes" Dec 09 11:45:26 crc kubenswrapper[4849]: I1209 11:45:26.552787 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede5a785-c680-4bc2-9e42-a7e0edaf7028" path="/var/lib/kubelet/pods/ede5a785-c680-4bc2-9e42-a7e0edaf7028/volumes" Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.124165 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.124559 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.169248 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.209520 4849 generic.go:334] "Generic (PLEG): container finished" podID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" containerID="f0722374ea33d17bded4962684ec2dd05380544139829e16935e9acdf0bfadf9" exitCode=0 Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.209594 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" 
event={"ID":"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9","Type":"ContainerDied","Data":"f0722374ea33d17bded4962684ec2dd05380544139829e16935e9acdf0bfadf9"} Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.214608 4849 generic.go:334] "Generic (PLEG): container finished" podID="638f2f6c-bb21-46d2-9232-417f74400264" containerID="3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524" exitCode=0 Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.214680 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" event={"ID":"638f2f6c-bb21-46d2-9232-417f74400264","Type":"ContainerDied","Data":"3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524"} Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.218327 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"574c9a8a-6aaf-4344-b566-039bf65b788d","Type":"ContainerStarted","Data":"9798a1b65eb5da225b3eb51124d33ddd1322fb66b128ed21cf754f78a7dead16"} Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.405489 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 09 11:45:27 crc kubenswrapper[4849]: I1209 11:45:27.510421 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.225903 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce","Type":"ContainerStarted","Data":"94daa341327140d3d5e1234bc54532a2b610d45f4af2000269b9b80bf9c626d3"} Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.227226 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.229555 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" event={"ID":"638f2f6c-bb21-46d2-9232-417f74400264","Type":"ContainerStarted","Data":"e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986"} Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.230244 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.232922 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" event={"ID":"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9","Type":"ContainerStarted","Data":"0b0abbb896d4f2a29eadeded67dc3b2b9705c1bee2c164d6e717b4a010e2735b"} Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.232955 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.232969 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.262506 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.629343572 podStartE2EDuration="41.262487528s" podCreationTimestamp="2025-12-09 11:44:47 +0000 UTC" firstStartedPulling="2025-12-09 11:44:48.561340195 +0000 UTC m=+1071.101224501" lastFinishedPulling="2025-12-09 11:45:27.194484131 +0000 UTC m=+1109.734368457" observedRunningTime="2025-12-09 11:45:28.25576164 +0000 UTC m=+1110.795645966" watchObservedRunningTime="2025-12-09 
11:45:28.262487528 +0000 UTC m=+1110.802371844" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.278906 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" podStartSLOduration=3.763868611 podStartE2EDuration="4.27888426s" podCreationTimestamp="2025-12-09 11:45:24 +0000 UTC" firstStartedPulling="2025-12-09 11:45:25.44822155 +0000 UTC m=+1107.988105866" lastFinishedPulling="2025-12-09 11:45:25.963237199 +0000 UTC m=+1108.503121515" observedRunningTime="2025-12-09 11:45:28.276090098 +0000 UTC m=+1110.815974414" watchObservedRunningTime="2025-12-09 11:45:28.27888426 +0000 UTC m=+1110.818768596" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.298602 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.300602 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.306273 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" podStartSLOduration=3.8646641859999997 podStartE2EDuration="4.306250894s" podCreationTimestamp="2025-12-09 11:45:24 +0000 UTC" firstStartedPulling="2025-12-09 11:45:25.582683908 +0000 UTC m=+1108.122568224" lastFinishedPulling="2025-12-09 11:45:26.024270606 +0000 UTC m=+1108.564154932" observedRunningTime="2025-12-09 11:45:28.29837298 +0000 UTC m=+1110.838257296" watchObservedRunningTime="2025-12-09 11:45:28.306250894 +0000 UTC m=+1110.846135210" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.774138 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 09 11:45:28 crc kubenswrapper[4849]: E1209 11:45:28.774513 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb794626-528c-420e-bdf1-ae2ae55d217c" containerName="collect-profiles" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.774535 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb794626-528c-420e-bdf1-ae2ae55d217c" containerName="collect-profiles" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.774683 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb794626-528c-420e-bdf1-ae2ae55d217c" containerName="collect-profiles" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.775595 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.780211 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.786574 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.786747 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.786574 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qrb6n" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.815936 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.833534 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c816f3aa-88ea-408d-a6e1-dd1e962688c6-scripts\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.833633 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c816f3aa-88ea-408d-a6e1-dd1e962688c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.833702 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c816f3aa-88ea-408d-a6e1-dd1e962688c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.833740 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c816f3aa-88ea-408d-a6e1-dd1e962688c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.833824 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwqvc\" (UniqueName: \"kubernetes.io/projected/c816f3aa-88ea-408d-a6e1-dd1e962688c6-kube-api-access-mwqvc\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.833873 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c816f3aa-88ea-408d-a6e1-dd1e962688c6-config\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.833904 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c816f3aa-88ea-408d-a6e1-dd1e962688c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: 
I1209 11:45:28.934967 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c816f3aa-88ea-408d-a6e1-dd1e962688c6-config\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.935031 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c816f3aa-88ea-408d-a6e1-dd1e962688c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.935070 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c816f3aa-88ea-408d-a6e1-dd1e962688c6-scripts\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.935127 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c816f3aa-88ea-408d-a6e1-dd1e962688c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.935177 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c816f3aa-88ea-408d-a6e1-dd1e962688c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.935210 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c816f3aa-88ea-408d-a6e1-dd1e962688c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.935292 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwqvc\" (UniqueName: \"kubernetes.io/projected/c816f3aa-88ea-408d-a6e1-dd1e962688c6-kube-api-access-mwqvc\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.935633 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c816f3aa-88ea-408d-a6e1-dd1e962688c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.935918 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c816f3aa-88ea-408d-a6e1-dd1e962688c6-config\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.936515 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c816f3aa-88ea-408d-a6e1-dd1e962688c6-scripts\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.940867 4849 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c816f3aa-88ea-408d-a6e1-dd1e962688c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.941727 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c816f3aa-88ea-408d-a6e1-dd1e962688c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.943225 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c816f3aa-88ea-408d-a6e1-dd1e962688c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:28 crc kubenswrapper[4849]: I1209 11:45:28.960924 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwqvc\" (UniqueName: \"kubernetes.io/projected/c816f3aa-88ea-408d-a6e1-dd1e962688c6-kube-api-access-mwqvc\") pod \"ovn-northd-0\" (UID: \"c816f3aa-88ea-408d-a6e1-dd1e962688c6\") " pod="openstack/ovn-northd-0" Dec 09 11:45:29 crc kubenswrapper[4849]: I1209 11:45:29.100143 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 11:45:29 crc kubenswrapper[4849]: I1209 11:45:29.573589 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 09 11:45:29 crc kubenswrapper[4849]: I1209 11:45:29.612689 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 11:45:29 crc kubenswrapper[4849]: W1209 11:45:29.634076 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc816f3aa_88ea_408d_a6e1_dd1e962688c6.slice/crio-4304a8c86335ca95132e49ef0f57012a659db3cd463e7d0f2b92db3198fc494c WatchSource:0}: Error finding container 4304a8c86335ca95132e49ef0f57012a659db3cd463e7d0f2b92db3198fc494c: Status 404 returned error can't find the container with id 4304a8c86335ca95132e49ef0f57012a659db3cd463e7d0f2b92db3198fc494c Dec 09 11:45:30 crc kubenswrapper[4849]: I1209 11:45:30.260499 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c816f3aa-88ea-408d-a6e1-dd1e962688c6","Type":"ContainerStarted","Data":"4304a8c86335ca95132e49ef0f57012a659db3cd463e7d0f2b92db3198fc494c"} Dec 09 11:45:32 crc kubenswrapper[4849]: I1209 11:45:32.810516 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 09 11:45:33 crc kubenswrapper[4849]: I1209 11:45:33.281326 4849 generic.go:334] "Generic (PLEG): container finished" podID="f78d8a52-1a90-4413-acb9-3925dfa4f1f0" containerID="b9115513ead9b48aa65d202722e92292c4f23dc1a4cfb1b5ddd8d654c2fc51be" exitCode=0 Dec 09 11:45:33 crc kubenswrapper[4849]: I1209 11:45:33.281387 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f78d8a52-1a90-4413-acb9-3925dfa4f1f0","Type":"ContainerDied","Data":"b9115513ead9b48aa65d202722e92292c4f23dc1a4cfb1b5ddd8d654c2fc51be"} Dec 09 11:45:33 crc kubenswrapper[4849]: I1209 11:45:33.285855 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-northd-0" event={"ID":"c816f3aa-88ea-408d-a6e1-dd1e962688c6","Type":"ContainerStarted","Data":"23e060c2fa2443f0c62599211a279c8d8b542098c2f4127a708afb8bd27ec44e"} Dec 09 11:45:33 crc kubenswrapper[4849]: I1209 11:45:33.285909 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c816f3aa-88ea-408d-a6e1-dd1e962688c6","Type":"ContainerStarted","Data":"3c32d90defb6c5e362c8269c32e92ae9fb96587d857dec0e3d7589b5e4186e17"} Dec 09 11:45:33 crc kubenswrapper[4849]: I1209 11:45:33.286234 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 09 11:45:33 crc kubenswrapper[4849]: I1209 11:45:33.334271 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.877418557 podStartE2EDuration="5.334250363s" podCreationTimestamp="2025-12-09 11:45:28 +0000 UTC" firstStartedPulling="2025-12-09 11:45:29.636256446 +0000 UTC m=+1112.176140762" lastFinishedPulling="2025-12-09 11:45:32.093088252 +0000 UTC m=+1114.632972568" observedRunningTime="2025-12-09 11:45:33.328492106 +0000 UTC m=+1115.868376432" watchObservedRunningTime="2025-12-09 11:45:33.334250363 +0000 UTC m=+1115.874134689" Dec 09 11:45:34 crc kubenswrapper[4849]: I1209 11:45:34.293467 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f78d8a52-1a90-4413-acb9-3925dfa4f1f0","Type":"ContainerStarted","Data":"5355c916748ee05cff7aab40a775dc71318305071100144673384a678b004ae9"} Dec 09 11:45:34 crc kubenswrapper[4849]: I1209 11:45:34.315632 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.304630179 podStartE2EDuration="50.315614787s" podCreationTimestamp="2025-12-09 11:44:44 +0000 UTC" firstStartedPulling="2025-12-09 11:44:47.10605989 +0000 UTC m=+1069.645944206" lastFinishedPulling="2025-12-09 11:45:25.117044498 +0000 UTC m=+1107.656928814" observedRunningTime="2025-12-09 11:45:34.312826526 +0000 UTC m=+1116.852710862" watchObservedRunningTime="2025-12-09 11:45:34.315614787 +0000 UTC m=+1116.855499103" Dec 09 11:45:34 crc kubenswrapper[4849]: I1209 11:45:34.806613 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:34 crc kubenswrapper[4849]: I1209 11:45:34.969624 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.026120 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x849b"] Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.304527 4849 generic.go:334] "Generic (PLEG): container finished" podID="574c9a8a-6aaf-4344-b566-039bf65b788d" containerID="9798a1b65eb5da225b3eb51124d33ddd1322fb66b128ed21cf754f78a7dead16" exitCode=0 Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.304631 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"574c9a8a-6aaf-4344-b566-039bf65b788d","Type":"ContainerDied","Data":"9798a1b65eb5da225b3eb51124d33ddd1322fb66b128ed21cf754f78a7dead16"} Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.304754 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" podUID="638f2f6c-bb21-46d2-9232-417f74400264" containerName="dnsmasq-dns" 
containerID="cri-o://e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986" gracePeriod=10 Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.726643 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.881332 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-ovsdbserver-nb\") pod \"638f2f6c-bb21-46d2-9232-417f74400264\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.881675 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-config\") pod \"638f2f6c-bb21-46d2-9232-417f74400264\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.881695 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-dns-svc\") pod \"638f2f6c-bb21-46d2-9232-417f74400264\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.881797 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7n4\" (UniqueName: \"kubernetes.io/projected/638f2f6c-bb21-46d2-9232-417f74400264-kube-api-access-4c7n4\") pod \"638f2f6c-bb21-46d2-9232-417f74400264\" (UID: \"638f2f6c-bb21-46d2-9232-417f74400264\") " Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.889354 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638f2f6c-bb21-46d2-9232-417f74400264-kube-api-access-4c7n4" (OuterVolumeSpecName: "kube-api-access-4c7n4") pod "638f2f6c-bb21-46d2-9232-417f74400264" (UID: "638f2f6c-bb21-46d2-9232-417f74400264"). InnerVolumeSpecName "kube-api-access-4c7n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.923498 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-config" (OuterVolumeSpecName: "config") pod "638f2f6c-bb21-46d2-9232-417f74400264" (UID: "638f2f6c-bb21-46d2-9232-417f74400264"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.923510 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "638f2f6c-bb21-46d2-9232-417f74400264" (UID: "638f2f6c-bb21-46d2-9232-417f74400264"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.925893 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "638f2f6c-bb21-46d2-9232-417f74400264" (UID: "638f2f6c-bb21-46d2-9232-417f74400264"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.983724 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.983810 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.983820 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/638f2f6c-bb21-46d2-9232-417f74400264-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:35 crc kubenswrapper[4849]: I1209 11:45:35.983832 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c7n4\" (UniqueName: \"kubernetes.io/projected/638f2f6c-bb21-46d2-9232-417f74400264-kube-api-access-4c7n4\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.272450 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.272731 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.315013 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"574c9a8a-6aaf-4344-b566-039bf65b788d","Type":"ContainerStarted","Data":"6268d37b55784570d6c002568d0b44bf5f920538e5065d299d4a8d70caa436ef"} Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.317602 4849 generic.go:334] "Generic (PLEG): container finished" podID="638f2f6c-bb21-46d2-9232-417f74400264" containerID="e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986" exitCode=0 Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.317680 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.317689 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" event={"ID":"638f2f6c-bb21-46d2-9232-417f74400264","Type":"ContainerDied","Data":"e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986"} Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.318008 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-x849b" event={"ID":"638f2f6c-bb21-46d2-9232-417f74400264","Type":"ContainerDied","Data":"e432b9f14b2f8a6ec0e4020d23749b72ae3ae1027a299f078f8fcb94ca48b16b"} Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.318109 4849 scope.go:117] "RemoveContainer" containerID="e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.346952 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371985.50785 podStartE2EDuration="51.346925031s" podCreationTimestamp="2025-12-09 11:44:45 +0000 UTC" firstStartedPulling="2025-12-09 11:44:48.22328787 +0000 UTC m=+1070.763172186" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:45:36.339826234 +0000 UTC m=+1118.879710550" watchObservedRunningTime="2025-12-09 11:45:36.346925031 +0000 UTC m=+1118.886809347" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.371136 4849 scope.go:117] "RemoveContainer" containerID="3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.383958 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x849b"] Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.389752 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x849b"] Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.427833 4849 scope.go:117] "RemoveContainer" containerID="e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986" Dec 09 11:45:36 crc kubenswrapper[4849]: E1209 11:45:36.428216 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986\": container with ID starting with e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986 not found: ID does not exist" containerID="e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.428250 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986"} err="failed to get container status \"e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986\": rpc error: code = NotFound desc = could not find container \"e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986\": container with ID starting with e633ad148debfefa66d29d634c0554e8b3e356dc3930473e60233d31169b7986 not found: ID does not exist" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.428277 4849 scope.go:117] "RemoveContainer" containerID="3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524" Dec 09 11:45:36 crc kubenswrapper[4849]: E1209 11:45:36.428561 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524\": container with ID starting with 3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524 not found: ID does not exist" containerID="3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.428586 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524"} err="failed to get container status \"3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524\": rpc error: code = NotFound desc = could not find container \"3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524\": container with ID starting with 3cd471b6947a4067c9f75e12194bea00f5931b012c5dd446d5e8b83fe37d5524 not found: ID does not exist" Dec 09 11:45:36 crc kubenswrapper[4849]: I1209 11:45:36.550837 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638f2f6c-bb21-46d2-9232-417f74400264" path="/var/lib/kubelet/pods/638f2f6c-bb21-46d2-9232-417f74400264/volumes" Dec 09 11:45:37 crc kubenswrapper[4849]: I1209 11:45:37.351973 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 09 11:45:37 crc kubenswrapper[4849]: I1209 11:45:37.352315 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 09 11:45:40 crc kubenswrapper[4849]: I1209 11:45:40.357432 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 09 11:45:40 crc kubenswrapper[4849]: I1209 11:45:40.430631 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 09 11:45:41 crc kubenswrapper[4849]: I1209 11:45:41.437680 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 09 11:45:41 crc kubenswrapper[4849]: I1209 11:45:41.518103 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.115626 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fbwzp"] Dec 09 11:45:43 crc kubenswrapper[4849]: E1209 11:45:43.116432 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638f2f6c-bb21-46d2-9232-417f74400264" containerName="init" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.116450 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="638f2f6c-bb21-46d2-9232-417f74400264" containerName="init" Dec 09 11:45:43 crc kubenswrapper[4849]: E1209 11:45:43.116469 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638f2f6c-bb21-46d2-9232-417f74400264" containerName="dnsmasq-dns" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.116476 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="638f2f6c-bb21-46d2-9232-417f74400264" containerName="dnsmasq-dns" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.116695 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="638f2f6c-bb21-46d2-9232-417f74400264" containerName="dnsmasq-dns" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.117397 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.127331 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fbwzp"] Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.198735 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25daca92-ae6e-4c61-9352-5f84ab7c37ef-operator-scripts\") pod \"glance-db-create-fbwzp\" (UID: \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\") " pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.199055 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck2lg\" (UniqueName: \"kubernetes.io/projected/25daca92-ae6e-4c61-9352-5f84ab7c37ef-kube-api-access-ck2lg\") pod \"glance-db-create-fbwzp\" (UID: \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\") " pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.258487 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a6c1-account-create-update-j9xp8"] Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.261190 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.264264 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.282656 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6c1-account-create-update-j9xp8"] Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.301210 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25daca92-ae6e-4c61-9352-5f84ab7c37ef-operator-scripts\") pod \"glance-db-create-fbwzp\" (UID: \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\") " pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.301307 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck2lg\" (UniqueName: \"kubernetes.io/projected/25daca92-ae6e-4c61-9352-5f84ab7c37ef-kube-api-access-ck2lg\") pod \"glance-db-create-fbwzp\" (UID: \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\") " pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.302646 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25daca92-ae6e-4c61-9352-5f84ab7c37ef-operator-scripts\") pod \"glance-db-create-fbwzp\" (UID: \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\") " pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.320035 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck2lg\" (UniqueName: \"kubernetes.io/projected/25daca92-ae6e-4c61-9352-5f84ab7c37ef-kube-api-access-ck2lg\") pod \"glance-db-create-fbwzp\" (UID: \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\") " pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.403275 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96fa319a-9e59-492f-9ca8-bd4277eec701-operator-scripts\") pod 
\"glance-a6c1-account-create-update-j9xp8\" (UID: \"96fa319a-9e59-492f-9ca8-bd4277eec701\") " pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.403913 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7gtn\" (UniqueName: \"kubernetes.io/projected/96fa319a-9e59-492f-9ca8-bd4277eec701-kube-api-access-r7gtn\") pod \"glance-a6c1-account-create-update-j9xp8\" (UID: \"96fa319a-9e59-492f-9ca8-bd4277eec701\") " pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.437354 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.521496 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96fa319a-9e59-492f-9ca8-bd4277eec701-operator-scripts\") pod \"glance-a6c1-account-create-update-j9xp8\" (UID: \"96fa319a-9e59-492f-9ca8-bd4277eec701\") " pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.521588 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7gtn\" (UniqueName: \"kubernetes.io/projected/96fa319a-9e59-492f-9ca8-bd4277eec701-kube-api-access-r7gtn\") pod \"glance-a6c1-account-create-update-j9xp8\" (UID: \"96fa319a-9e59-492f-9ca8-bd4277eec701\") " pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.522866 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96fa319a-9e59-492f-9ca8-bd4277eec701-operator-scripts\") pod \"glance-a6c1-account-create-update-j9xp8\" (UID: \"96fa319a-9e59-492f-9ca8-bd4277eec701\") " pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.558322 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7gtn\" (UniqueName: \"kubernetes.io/projected/96fa319a-9e59-492f-9ca8-bd4277eec701-kube-api-access-r7gtn\") pod \"glance-a6c1-account-create-update-j9xp8\" (UID: \"96fa319a-9e59-492f-9ca8-bd4277eec701\") " pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.596822 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:43 crc kubenswrapper[4849]: I1209 11:45:43.683576 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fbwzp"] Dec 09 11:45:43 crc kubenswrapper[4849]: W1209 11:45:43.687208 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25daca92_ae6e_4c61_9352_5f84ab7c37ef.slice/crio-a1001ac7700f32b50c6a0314737d394e67ef08b07553f35563671df78de53aa7 WatchSource:0}: Error finding container a1001ac7700f32b50c6a0314737d394e67ef08b07553f35563671df78de53aa7: Status 404 returned error can't find the container with id a1001ac7700f32b50c6a0314737d394e67ef08b07553f35563671df78de53aa7 Dec 09 11:45:44 crc kubenswrapper[4849]: I1209 11:45:44.086513 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6c1-account-create-update-j9xp8"] Dec 09 11:45:44 crc kubenswrapper[4849]: W1209 11:45:44.091312 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96fa319a_9e59_492f_9ca8_bd4277eec701.slice/crio-1d5b483880e36b74c0bee42f3c4cda53c4057942633cec4f068d352be976cb18 WatchSource:0}: Error finding container 1d5b483880e36b74c0bee42f3c4cda53c4057942633cec4f068d352be976cb18: Status 404 returned error can't find the container with id 1d5b483880e36b74c0bee42f3c4cda53c4057942633cec4f068d352be976cb18 Dec 09 11:45:44 crc kubenswrapper[4849]: I1209 11:45:44.153335 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 09 11:45:44 crc kubenswrapper[4849]: I1209 11:45:44.389234 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6c1-account-create-update-j9xp8" event={"ID":"96fa319a-9e59-492f-9ca8-bd4277eec701","Type":"ContainerStarted","Data":"7a1d9ad0742926ccb19eec63085d4b5a134b57b8b4c36b8d45d79151eafef657"} Dec 09 11:45:44 crc kubenswrapper[4849]: I1209 11:45:44.389280 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6c1-account-create-update-j9xp8" event={"ID":"96fa319a-9e59-492f-9ca8-bd4277eec701","Type":"ContainerStarted","Data":"1d5b483880e36b74c0bee42f3c4cda53c4057942633cec4f068d352be976cb18"} Dec 09 11:45:44 crc kubenswrapper[4849]: I1209 11:45:44.390868 4849 generic.go:334] "Generic (PLEG): container finished" podID="25daca92-ae6e-4c61-9352-5f84ab7c37ef" containerID="220d95d1ccdb1cfd901238d0dcbd6f0f5b6087914e8082f0735185222ed2da20" exitCode=0 Dec 09 11:45:44 crc kubenswrapper[4849]: I1209 11:45:44.390950 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fbwzp" event={"ID":"25daca92-ae6e-4c61-9352-5f84ab7c37ef","Type":"ContainerDied","Data":"220d95d1ccdb1cfd901238d0dcbd6f0f5b6087914e8082f0735185222ed2da20"} Dec 09 11:45:44 crc kubenswrapper[4849]: I1209 11:45:44.391544 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fbwzp" event={"ID":"25daca92-ae6e-4c61-9352-5f84ab7c37ef","Type":"ContainerStarted","Data":"a1001ac7700f32b50c6a0314737d394e67ef08b07553f35563671df78de53aa7"} Dec 09 11:45:44 crc kubenswrapper[4849]: I1209 11:45:44.404538 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-a6c1-account-create-update-j9xp8" podStartSLOduration=1.404521561 podStartE2EDuration="1.404521561s" podCreationTimestamp="2025-12-09 11:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:45:44.404316457 +0000 UTC m=+1126.944200793" watchObservedRunningTime="2025-12-09 11:45:44.404521561 +0000 UTC m=+1126.944405887" Dec 09 11:45:45 crc kubenswrapper[4849]: I1209 11:45:45.401914 4849 generic.go:334] "Generic (PLEG): container finished" podID="96fa319a-9e59-492f-9ca8-bd4277eec701" containerID="7a1d9ad0742926ccb19eec63085d4b5a134b57b8b4c36b8d45d79151eafef657" exitCode=0 Dec 09 11:45:45 crc kubenswrapper[4849]: I1209 11:45:45.402000 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6c1-account-create-update-j9xp8" event={"ID":"96fa319a-9e59-492f-9ca8-bd4277eec701","Type":"ContainerDied","Data":"7a1d9ad0742926ccb19eec63085d4b5a134b57b8b4c36b8d45d79151eafef657"} Dec 09 11:45:45 crc kubenswrapper[4849]: I1209 11:45:45.675706 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:45 crc kubenswrapper[4849]: I1209 11:45:45.762578 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck2lg\" (UniqueName: \"kubernetes.io/projected/25daca92-ae6e-4c61-9352-5f84ab7c37ef-kube-api-access-ck2lg\") pod \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\" (UID: \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\") " Dec 09 11:45:45 crc kubenswrapper[4849]: I1209 11:45:45.762820 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25daca92-ae6e-4c61-9352-5f84ab7c37ef-operator-scripts\") pod \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\" (UID: \"25daca92-ae6e-4c61-9352-5f84ab7c37ef\") " Dec 09 11:45:45 crc kubenswrapper[4849]: I1209 11:45:45.763512 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25daca92-ae6e-4c61-9352-5f84ab7c37ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25daca92-ae6e-4c61-9352-5f84ab7c37ef" (UID: "25daca92-ae6e-4c61-9352-5f84ab7c37ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:45 crc kubenswrapper[4849]: I1209 11:45:45.768232 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25daca92-ae6e-4c61-9352-5f84ab7c37ef-kube-api-access-ck2lg" (OuterVolumeSpecName: "kube-api-access-ck2lg") pod "25daca92-ae6e-4c61-9352-5f84ab7c37ef" (UID: "25daca92-ae6e-4c61-9352-5f84ab7c37ef"). InnerVolumeSpecName "kube-api-access-ck2lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:45 crc kubenswrapper[4849]: I1209 11:45:45.864363 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25daca92-ae6e-4c61-9352-5f84ab7c37ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:45 crc kubenswrapper[4849]: I1209 11:45:45.864710 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck2lg\" (UniqueName: \"kubernetes.io/projected/25daca92-ae6e-4c61-9352-5f84ab7c37ef-kube-api-access-ck2lg\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.410079 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fbwzp" Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.410134 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fbwzp" event={"ID":"25daca92-ae6e-4c61-9352-5f84ab7c37ef","Type":"ContainerDied","Data":"a1001ac7700f32b50c6a0314737d394e67ef08b07553f35563671df78de53aa7"} Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.410790 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1001ac7700f32b50c6a0314737d394e67ef08b07553f35563671df78de53aa7" Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.687891 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.778972 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7gtn\" (UniqueName: \"kubernetes.io/projected/96fa319a-9e59-492f-9ca8-bd4277eec701-kube-api-access-r7gtn\") pod \"96fa319a-9e59-492f-9ca8-bd4277eec701\" (UID: \"96fa319a-9e59-492f-9ca8-bd4277eec701\") " Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.779098 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96fa319a-9e59-492f-9ca8-bd4277eec701-operator-scripts\") pod \"96fa319a-9e59-492f-9ca8-bd4277eec701\" (UID: \"96fa319a-9e59-492f-9ca8-bd4277eec701\") " Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.779932 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96fa319a-9e59-492f-9ca8-bd4277eec701-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96fa319a-9e59-492f-9ca8-bd4277eec701" (UID: "96fa319a-9e59-492f-9ca8-bd4277eec701"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.785658 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96fa319a-9e59-492f-9ca8-bd4277eec701-kube-api-access-r7gtn" (OuterVolumeSpecName: "kube-api-access-r7gtn") pod "96fa319a-9e59-492f-9ca8-bd4277eec701" (UID: "96fa319a-9e59-492f-9ca8-bd4277eec701"). InnerVolumeSpecName "kube-api-access-r7gtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.881437 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96fa319a-9e59-492f-9ca8-bd4277eec701-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:46 crc kubenswrapper[4849]: I1209 11:45:46.881489 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7gtn\" (UniqueName: \"kubernetes.io/projected/96fa319a-9e59-492f-9ca8-bd4277eec701-kube-api-access-r7gtn\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.331419 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mf4zj"] Dec 09 11:45:47 crc kubenswrapper[4849]: E1209 11:45:47.332069 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25daca92-ae6e-4c61-9352-5f84ab7c37ef" containerName="mariadb-database-create" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.332085 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="25daca92-ae6e-4c61-9352-5f84ab7c37ef" containerName="mariadb-database-create" Dec 09 11:45:47 crc kubenswrapper[4849]: E1209 11:45:47.332107 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96fa319a-9e59-492f-9ca8-bd4277eec701" containerName="mariadb-account-create-update" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.332114 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="96fa319a-9e59-492f-9ca8-bd4277eec701" containerName="mariadb-account-create-update" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.332253 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="25daca92-ae6e-4c61-9352-5f84ab7c37ef" containerName="mariadb-database-create" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.332276 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="96fa319a-9e59-492f-9ca8-bd4277eec701" containerName="mariadb-account-create-update" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.332814 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.345329 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mf4zj"] Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.417895 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6c1-account-create-update-j9xp8" event={"ID":"96fa319a-9e59-492f-9ca8-bd4277eec701","Type":"ContainerDied","Data":"1d5b483880e36b74c0bee42f3c4cda53c4057942633cec4f068d352be976cb18"} Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.417940 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d5b483880e36b74c0bee42f3c4cda53c4057942633cec4f068d352be976cb18" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.417944 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6c1-account-create-update-j9xp8" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.440096 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-60d1-account-create-update-qlzbg"] Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.441134 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.442713 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.452603 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-60d1-account-create-update-qlzbg"] Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.498261 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtcg\" (UniqueName: \"kubernetes.io/projected/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-kube-api-access-xbtcg\") pod \"keystone-db-create-mf4zj\" (UID: \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\") " pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.498342 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-operator-scripts\") pod \"keystone-db-create-mf4zj\" (UID: \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\") " pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.600192 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknhq\" (UniqueName: \"kubernetes.io/projected/f6b3e656-264c-42c0-afd9-26b87f3b208e-kube-api-access-hknhq\") pod \"keystone-60d1-account-create-update-qlzbg\" (UID: \"f6b3e656-264c-42c0-afd9-26b87f3b208e\") " pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.600306 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b3e656-264c-42c0-afd9-26b87f3b208e-operator-scripts\") pod \"keystone-60d1-account-create-update-qlzbg\" (UID: \"f6b3e656-264c-42c0-afd9-26b87f3b208e\") " pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.600330 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbtcg\" (UniqueName: \"kubernetes.io/projected/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-kube-api-access-xbtcg\") pod \"keystone-db-create-mf4zj\" (UID: \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\") " pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.600381 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-operator-scripts\") pod \"keystone-db-create-mf4zj\" (UID: \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\") " pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.601219 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-operator-scripts\") pod \"keystone-db-create-mf4zj\" (UID: \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\") " pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.622707 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbtcg\" (UniqueName: \"kubernetes.io/projected/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-kube-api-access-xbtcg\") pod 
\"keystone-db-create-mf4zj\" (UID: \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\") " pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.652244 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.701742 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b3e656-264c-42c0-afd9-26b87f3b208e-operator-scripts\") pod \"keystone-60d1-account-create-update-qlzbg\" (UID: \"f6b3e656-264c-42c0-afd9-26b87f3b208e\") " pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.701872 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hknhq\" (UniqueName: \"kubernetes.io/projected/f6b3e656-264c-42c0-afd9-26b87f3b208e-kube-api-access-hknhq\") pod \"keystone-60d1-account-create-update-qlzbg\" (UID: \"f6b3e656-264c-42c0-afd9-26b87f3b208e\") " pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.702602 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b3e656-264c-42c0-afd9-26b87f3b208e-operator-scripts\") pod \"keystone-60d1-account-create-update-qlzbg\" (UID: \"f6b3e656-264c-42c0-afd9-26b87f3b208e\") " pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.725312 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hknhq\" (UniqueName: \"kubernetes.io/projected/f6b3e656-264c-42c0-afd9-26b87f3b208e-kube-api-access-hknhq\") pod \"keystone-60d1-account-create-update-qlzbg\" (UID: \"f6b3e656-264c-42c0-afd9-26b87f3b208e\") " pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.773630 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.788978 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-s5889"] Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.791378 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s5889" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.817024 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s5889"] Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.898590 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ebfc-account-create-update-2mdbh"] Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.899817 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.901766 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.912449 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ebfc-account-create-update-2mdbh"] Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.920677 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0feb1b2-1589-42f6-824d-431ae417ce09-operator-scripts\") pod \"placement-db-create-s5889\" (UID: \"e0feb1b2-1589-42f6-824d-431ae417ce09\") " pod="openstack/placement-db-create-s5889" Dec 09 11:45:47 crc kubenswrapper[4849]: I1209 11:45:47.920752 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwzvj\" (UniqueName: \"kubernetes.io/projected/e0feb1b2-1589-42f6-824d-431ae417ce09-kube-api-access-pwzvj\") pod \"placement-db-create-s5889\" (UID: \"e0feb1b2-1589-42f6-824d-431ae417ce09\") " pod="openstack/placement-db-create-s5889" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.022452 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0feb1b2-1589-42f6-824d-431ae417ce09-operator-scripts\") pod \"placement-db-create-s5889\" (UID: \"e0feb1b2-1589-42f6-824d-431ae417ce09\") " pod="openstack/placement-db-create-s5889" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.022975 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-operator-scripts\") pod \"placement-ebfc-account-create-update-2mdbh\" (UID: \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\") " pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.022999 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbpfr\" (UniqueName: \"kubernetes.io/projected/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-kube-api-access-tbpfr\") pod \"placement-ebfc-account-create-update-2mdbh\" (UID: \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\") " pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.023020 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwzvj\" (UniqueName: \"kubernetes.io/projected/e0feb1b2-1589-42f6-824d-431ae417ce09-kube-api-access-pwzvj\") pod \"placement-db-create-s5889\" (UID: \"e0feb1b2-1589-42f6-824d-431ae417ce09\") " pod="openstack/placement-db-create-s5889" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.027670 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0feb1b2-1589-42f6-824d-431ae417ce09-operator-scripts\") pod \"placement-db-create-s5889\" (UID: \"e0feb1b2-1589-42f6-824d-431ae417ce09\") " pod="openstack/placement-db-create-s5889" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.046147 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwzvj\" (UniqueName: 
\"kubernetes.io/projected/e0feb1b2-1589-42f6-824d-431ae417ce09-kube-api-access-pwzvj\") pod \"placement-db-create-s5889\" (UID: \"e0feb1b2-1589-42f6-824d-431ae417ce09\") " pod="openstack/placement-db-create-s5889" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.124251 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-operator-scripts\") pod \"placement-ebfc-account-create-update-2mdbh\" (UID: \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\") " pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.124301 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbpfr\" (UniqueName: \"kubernetes.io/projected/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-kube-api-access-tbpfr\") pod \"placement-ebfc-account-create-update-2mdbh\" (UID: \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\") " pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.126110 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-operator-scripts\") pod \"placement-ebfc-account-create-update-2mdbh\" (UID: \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\") " pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.126324 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s5889" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.146075 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbpfr\" (UniqueName: \"kubernetes.io/projected/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-kube-api-access-tbpfr\") pod \"placement-ebfc-account-create-update-2mdbh\" (UID: \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\") " pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.157512 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mf4zj"] Dec 09 11:45:48 crc kubenswrapper[4849]: W1209 11:45:48.171786 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e9bdc85_b967_4475_a94b_4f4fa6e74c5b.slice/crio-ac71f9325a00e6f7b2bb9e2e0f956c51789fc6681587789cf99882bae8ebd094 WatchSource:0}: Error finding container ac71f9325a00e6f7b2bb9e2e0f956c51789fc6681587789cf99882bae8ebd094: Status 404 returned error can't find the container with id ac71f9325a00e6f7b2bb9e2e0f956c51789fc6681587789cf99882bae8ebd094 Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.220127 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.300770 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-60d1-account-create-update-qlzbg"] Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.434548 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-60d1-account-create-update-qlzbg" event={"ID":"f6b3e656-264c-42c0-afd9-26b87f3b208e","Type":"ContainerStarted","Data":"b6c19a9c20da9f0146474999090f53f4bf81bfbbc8edfc994d013fe4f43eaade"} Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.436177 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mf4zj" event={"ID":"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b","Type":"ContainerStarted","Data":"ac71f9325a00e6f7b2bb9e2e0f956c51789fc6681587789cf99882bae8ebd094"} Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.655259 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-w67cr"] Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.657115 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-w67cr"] Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.657214 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.667735 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.668272 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tfl2s" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.680009 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s5889"] Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.744914 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mssd6\" (UniqueName: \"kubernetes.io/projected/6c21ca38-f4ba-44cb-99db-914844f473d0-kube-api-access-mssd6\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.745047 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-config-data\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.745091 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-db-sync-config-data\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.745114 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-combined-ca-bundle\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.774617 4849 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/placement-ebfc-account-create-update-2mdbh"] Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.846083 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-config-data\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.846147 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-db-sync-config-data\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.846168 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-combined-ca-bundle\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.846225 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mssd6\" (UniqueName: \"kubernetes.io/projected/6c21ca38-f4ba-44cb-99db-914844f473d0-kube-api-access-mssd6\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.861617 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-db-sync-config-data\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.862588 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-config-data\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.863359 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-combined-ca-bundle\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:48 crc kubenswrapper[4849]: I1209 11:45:48.866019 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mssd6\" (UniqueName: \"kubernetes.io/projected/6c21ca38-f4ba-44cb-99db-914844f473d0-kube-api-access-mssd6\") pod \"glance-db-sync-w67cr\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.055087 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-w67cr" Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.447962 4849 generic.go:334] "Generic (PLEG): container finished" podID="5e9bdc85-b967-4475-a94b-4f4fa6e74c5b" containerID="46a742bd8a87e91acfafba87ef4385d9741a761b22a02991b9a4dcdb3aa94a00" exitCode=0 Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.448162 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mf4zj" event={"ID":"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b","Type":"ContainerDied","Data":"46a742bd8a87e91acfafba87ef4385d9741a761b22a02991b9a4dcdb3aa94a00"} Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.450257 4849 generic.go:334] "Generic (PLEG): container finished" podID="e95ad6f6-4e91-4d3d-b456-36e5a1cc5969" containerID="76687cbacdb8f3ed258e74330e46e339a82bb41ac82ec7757c7f79e280017e0e" exitCode=0 Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.450308 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ebfc-account-create-update-2mdbh" event={"ID":"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969","Type":"ContainerDied","Data":"76687cbacdb8f3ed258e74330e46e339a82bb41ac82ec7757c7f79e280017e0e"} Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.450327 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ebfc-account-create-update-2mdbh" event={"ID":"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969","Type":"ContainerStarted","Data":"7077d3bd7681712bf7238d05ca4ee26c04bb865899b128e3f6c5dc84cef501f6"} Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.451772 4849 generic.go:334] "Generic (PLEG): container finished" podID="f6b3e656-264c-42c0-afd9-26b87f3b208e" containerID="6a07e4fb7c4ec449518866150dae45465fee5533aadb050bb78300745c0deee5" exitCode=0 Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.451816 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-60d1-account-create-update-qlzbg" event={"ID":"f6b3e656-264c-42c0-afd9-26b87f3b208e","Type":"ContainerDied","Data":"6a07e4fb7c4ec449518866150dae45465fee5533aadb050bb78300745c0deee5"} Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.455359 4849 generic.go:334] "Generic (PLEG): container finished" podID="e0feb1b2-1589-42f6-824d-431ae417ce09" containerID="e60a9d5f87697469a492827b7403550f3b885f632095e3989e501ee48e90fa97" exitCode=0 Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.455387 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s5889" event={"ID":"e0feb1b2-1589-42f6-824d-431ae417ce09","Type":"ContainerDied","Data":"e60a9d5f87697469a492827b7403550f3b885f632095e3989e501ee48e90fa97"} Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.455416 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s5889" event={"ID":"e0feb1b2-1589-42f6-824d-431ae417ce09","Type":"ContainerStarted","Data":"b830401eafc166a8a59fe521df15c4555bfb9b2a0f62b3e06c26a7256a054b82"} Dec 09 11:45:49 crc kubenswrapper[4849]: I1209 11:45:49.637521 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-w67cr"] Dec 09 11:45:49 crc kubenswrapper[4849]: W1209 11:45:49.640601 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c21ca38_f4ba_44cb_99db_914844f473d0.slice/crio-8ce35d26dff1a1e5a46753fe44c429b9c08cef4c281aeb2b62669ad46f7fc9bd WatchSource:0}: Error finding container 
Dec 09 11:45:50 crc kubenswrapper[4849]: I1209 11:45:50.465952 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-w67cr" event={"ID":"6c21ca38-f4ba-44cb-99db-914844f473d0","Type":"ContainerStarted","Data":"8ce35d26dff1a1e5a46753fe44c429b9c08cef4c281aeb2b62669ad46f7fc9bd"} Dec 09 11:45:50 crc kubenswrapper[4849]: I1209 11:45:50.870188 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:50 crc kubenswrapper[4849]: I1209 11:45:50.978512 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-operator-scripts\") pod \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\" (UID: \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\") " Dec 09 11:45:50 crc kubenswrapper[4849]: I1209 11:45:50.978660 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbpfr\" (UniqueName: \"kubernetes.io/projected/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-kube-api-access-tbpfr\") pod \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\" (UID: \"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969\") " Dec 09 11:45:50 crc kubenswrapper[4849]: I1209 11:45:50.979257 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e95ad6f6-4e91-4d3d-b456-36e5a1cc5969" (UID: "e95ad6f6-4e91-4d3d-b456-36e5a1cc5969"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:50 crc kubenswrapper[4849]: I1209 11:45:50.984618 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-kube-api-access-tbpfr" (OuterVolumeSpecName: "kube-api-access-tbpfr") pod "e95ad6f6-4e91-4d3d-b456-36e5a1cc5969" (UID: "e95ad6f6-4e91-4d3d-b456-36e5a1cc5969"). InnerVolumeSpecName "kube-api-access-tbpfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:50 crc kubenswrapper[4849]: I1209 11:45:50.988260 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s5889" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.048305 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.059120 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.080896 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hknhq\" (UniqueName: \"kubernetes.io/projected/f6b3e656-264c-42c0-afd9-26b87f3b208e-kube-api-access-hknhq\") pod \"f6b3e656-264c-42c0-afd9-26b87f3b208e\" (UID: \"f6b3e656-264c-42c0-afd9-26b87f3b208e\") " Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.080990 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0feb1b2-1589-42f6-824d-431ae417ce09-operator-scripts\") pod \"e0feb1b2-1589-42f6-824d-431ae417ce09\" (UID: \"e0feb1b2-1589-42f6-824d-431ae417ce09\") " Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.081125 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbtcg\" (UniqueName: \"kubernetes.io/projected/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-kube-api-access-xbtcg\") pod \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\" (UID: \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\") " Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.081172 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-operator-scripts\") pod \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\" (UID: \"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b\") " Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.081252 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b3e656-264c-42c0-afd9-26b87f3b208e-operator-scripts\") pod \"f6b3e656-264c-42c0-afd9-26b87f3b208e\" (UID: \"f6b3e656-264c-42c0-afd9-26b87f3b208e\") " Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.081324 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwzvj\" (UniqueName: \"kubernetes.io/projected/e0feb1b2-1589-42f6-824d-431ae417ce09-kube-api-access-pwzvj\") pod \"e0feb1b2-1589-42f6-824d-431ae417ce09\" (UID: \"e0feb1b2-1589-42f6-824d-431ae417ce09\") " Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.081977 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e9bdc85-b967-4475-a94b-4f4fa6e74c5b" (UID: "5e9bdc85-b967-4475-a94b-4f4fa6e74c5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.082458 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0feb1b2-1589-42f6-824d-431ae417ce09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0feb1b2-1589-42f6-824d-431ae417ce09" (UID: "e0feb1b2-1589-42f6-824d-431ae417ce09"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.083296 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.083319 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbpfr\" (UniqueName: \"kubernetes.io/projected/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969-kube-api-access-tbpfr\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.084750 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b3e656-264c-42c0-afd9-26b87f3b208e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6b3e656-264c-42c0-afd9-26b87f3b208e" (UID: "f6b3e656-264c-42c0-afd9-26b87f3b208e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.085612 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b3e656-264c-42c0-afd9-26b87f3b208e-kube-api-access-hknhq" (OuterVolumeSpecName: "kube-api-access-hknhq") pod "f6b3e656-264c-42c0-afd9-26b87f3b208e" (UID: "f6b3e656-264c-42c0-afd9-26b87f3b208e"). InnerVolumeSpecName "kube-api-access-hknhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.088921 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0feb1b2-1589-42f6-824d-431ae417ce09-kube-api-access-pwzvj" (OuterVolumeSpecName: "kube-api-access-pwzvj") pod "e0feb1b2-1589-42f6-824d-431ae417ce09" (UID: "e0feb1b2-1589-42f6-824d-431ae417ce09"). InnerVolumeSpecName "kube-api-access-pwzvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.089041 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-kube-api-access-xbtcg" (OuterVolumeSpecName: "kube-api-access-xbtcg") pod "5e9bdc85-b967-4475-a94b-4f4fa6e74c5b" (UID: "5e9bdc85-b967-4475-a94b-4f4fa6e74c5b"). InnerVolumeSpecName "kube-api-access-xbtcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.132759 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.132820 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.132868 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.133581 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a2af74fde05e47664890560ba0230403bcc6a0b200101e65907871ade0b4a58"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.133648 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://0a2af74fde05e47664890560ba0230403bcc6a0b200101e65907871ade0b4a58" gracePeriod=600 Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.184154 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hknhq\" (UniqueName: \"kubernetes.io/projected/f6b3e656-264c-42c0-afd9-26b87f3b208e-kube-api-access-hknhq\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.184524 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0feb1b2-1589-42f6-824d-431ae417ce09-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.184538 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbtcg\" (UniqueName: \"kubernetes.io/projected/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-kube-api-access-xbtcg\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.184552 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.184567 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b3e656-264c-42c0-afd9-26b87f3b208e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.184581 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwzvj\" (UniqueName: \"kubernetes.io/projected/e0feb1b2-1589-42f6-824d-431ae417ce09-kube-api-access-pwzvj\") on node \"crc\" DevicePath \"\"" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 
Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.474983 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6c19a9c20da9f0146474999090f53f4bf81bfbbc8edfc994d013fe4f43eaade" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.475034 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-60d1-account-create-update-qlzbg" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.482719 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s5889" event={"ID":"e0feb1b2-1589-42f6-824d-431ae417ce09","Type":"ContainerDied","Data":"b830401eafc166a8a59fe521df15c4555bfb9b2a0f62b3e06c26a7256a054b82"} Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.482761 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b830401eafc166a8a59fe521df15c4555bfb9b2a0f62b3e06c26a7256a054b82" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.482818 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s5889" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.484125 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mf4zj" event={"ID":"5e9bdc85-b967-4475-a94b-4f4fa6e74c5b","Type":"ContainerDied","Data":"ac71f9325a00e6f7b2bb9e2e0f956c51789fc6681587789cf99882bae8ebd094"} Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.484152 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac71f9325a00e6f7b2bb9e2e0f956c51789fc6681587789cf99882bae8ebd094" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.484164 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mf4zj" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.491778 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ebfc-account-create-update-2mdbh" event={"ID":"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969","Type":"ContainerDied","Data":"7077d3bd7681712bf7238d05ca4ee26c04bb865899b128e3f6c5dc84cef501f6"} Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.491930 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7077d3bd7681712bf7238d05ca4ee26c04bb865899b128e3f6c5dc84cef501f6" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.492312 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ebfc-account-create-update-2mdbh" Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.499882 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="0a2af74fde05e47664890560ba0230403bcc6a0b200101e65907871ade0b4a58" exitCode=0 Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.499978 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"0a2af74fde05e47664890560ba0230403bcc6a0b200101e65907871ade0b4a58"} Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.500014 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"b5d0f54890d644c510ae563a6b696f7236880271ca1fac58424d719b2dbb5e99"} Dec 09 11:45:51 crc kubenswrapper[4849]: I1209 11:45:51.500034 4849 scope.go:117] "RemoveContainer" containerID="fb7e27f11d509caaa9ebc587327526354751d200d66bddbb3fd44be26e61d13f" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.285538 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-czrmh" podUID="69a39d69-d705-4246-bc77-cbdd3fadfefa" containerName="ovn-controller" probeResult="failure" output=< Dec 09 11:45:57 crc kubenswrapper[4849]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 11:45:57 crc kubenswrapper[4849]: > Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.329464 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.333793 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-chw84" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.561663 4849 generic.go:334] "Generic (PLEG): container finished" podID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" containerID="f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663" exitCode=0 Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.561754 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e5432d8-b092-46cd-8aab-cb194ebb23f7","Type":"ContainerDied","Data":"f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663"} Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.563188 4849 generic.go:334] "Generic (PLEG): container finished" podID="86df3233-1d99-4023-9ff7-55bab063bd7e" containerID="115a468a692fb19ce7f718c4cc8fdb83f4844c63e4e4a649b01cb6f4bca1c956" exitCode=0 Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.563286 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86df3233-1d99-4023-9ff7-55bab063bd7e","Type":"ContainerDied","Data":"115a468a692fb19ce7f718c4cc8fdb83f4844c63e4e4a649b01cb6f4bca1c956"} Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.593617 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-czrmh-config-986vr"] Dec 09 11:45:57 crc kubenswrapper[4849]: E1209 11:45:57.594030 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95ad6f6-4e91-4d3d-b456-36e5a1cc5969" containerName="mariadb-account-create-update" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.594047 
4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95ad6f6-4e91-4d3d-b456-36e5a1cc5969" containerName="mariadb-account-create-update" Dec 09 11:45:57 crc kubenswrapper[4849]: E1209 11:45:57.594070 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0feb1b2-1589-42f6-824d-431ae417ce09" containerName="mariadb-database-create" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.594079 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0feb1b2-1589-42f6-824d-431ae417ce09" containerName="mariadb-database-create" Dec 09 11:45:57 crc kubenswrapper[4849]: E1209 11:45:57.594100 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b3e656-264c-42c0-afd9-26b87f3b208e" containerName="mariadb-account-create-update" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.594111 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b3e656-264c-42c0-afd9-26b87f3b208e" containerName="mariadb-account-create-update" Dec 09 11:45:57 crc kubenswrapper[4849]: E1209 11:45:57.594129 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9bdc85-b967-4475-a94b-4f4fa6e74c5b" containerName="mariadb-database-create" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.594137 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9bdc85-b967-4475-a94b-4f4fa6e74c5b" containerName="mariadb-database-create" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.594359 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0feb1b2-1589-42f6-824d-431ae417ce09" containerName="mariadb-database-create" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.594377 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9bdc85-b967-4475-a94b-4f4fa6e74c5b" containerName="mariadb-database-create" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.594389 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95ad6f6-4e91-4d3d-b456-36e5a1cc5969" containerName="mariadb-account-create-update" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.594403 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b3e656-264c-42c0-afd9-26b87f3b208e" containerName="mariadb-account-create-update" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.595109 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-czrmh-config-986vr"
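
The cpu_manager / memory_manager entries above fire while admitting the new ovn-controller config pod: RemoveStaleState drops CPU-set and memory assignments still recorded for containers of the finished db-create and account-create jobs. A sketch of that cleanup, under the assumption that assignments are keyed by pod UID and container name (invented types, not kubelet source):

package main

import "fmt"

// key identifies one container's resource assignment, mirroring the
// podUID/containerName pair in the log lines.
type key struct{ podUID, containerName string }

// removeStaleState drops assignments whose pod is no longer active --
// the moral equivalent of the cpu_manager/memory_manager passes above.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", k.podUID, k.containerName)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]string{
		{"e95ad6f6-4e91-4d3d-b456-36e5a1cc5969", "mariadb-account-create-update"}: "0-3",
	}
	removeStaleState(assignments, map[string]bool{}) // job pod is gone -> entry is stale
}
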
Need to start a new one" pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.606556 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.616709 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-czrmh-config-986vr"] Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.694790 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-scripts\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.694868 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-additional-scripts\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.694944 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dknx2\" (UniqueName: \"kubernetes.io/projected/7eae9805-7adc-47e1-9b64-5a33793bd7d3-kube-api-access-dknx2\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.695039 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-log-ovn\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.695064 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run-ovn\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.695134 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.796975 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-log-ovn\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.797037 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run-ovn\") 
pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.797091 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.797127 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-scripts\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.797152 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-additional-scripts\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.797392 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dknx2\" (UniqueName: \"kubernetes.io/projected/7eae9805-7adc-47e1-9b64-5a33793bd7d3-kube-api-access-dknx2\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.798398 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.798555 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run-ovn\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.799255 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-additional-scripts\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.800479 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-scripts\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.801555 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-log-ovn\") pod \"ovn-controller-czrmh-config-986vr\" 
(UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.826353 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dknx2\" (UniqueName: \"kubernetes.io/projected/7eae9805-7adc-47e1-9b64-5a33793bd7d3-kube-api-access-dknx2\") pod \"ovn-controller-czrmh-config-986vr\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:45:57 crc kubenswrapper[4849]: I1209 11:45:57.932756 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:46:02 crc kubenswrapper[4849]: I1209 11:46:02.347097 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-czrmh" podUID="69a39d69-d705-4246-bc77-cbdd3fadfefa" containerName="ovn-controller" probeResult="failure" output=< Dec 09 11:46:02 crc kubenswrapper[4849]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 11:46:02 crc kubenswrapper[4849]: > Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 11:46:03.354606 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-czrmh-config-986vr"] Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 11:46:03.619924 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-w67cr" event={"ID":"6c21ca38-f4ba-44cb-99db-914844f473d0","Type":"ContainerStarted","Data":"ac4f94b6c6e2a145a5339d30f59a9e8bfba7c929483f0d7c5d693d4533522a68"} Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 11:46:03.621703 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-czrmh-config-986vr" event={"ID":"7eae9805-7adc-47e1-9b64-5a33793bd7d3","Type":"ContainerStarted","Data":"cf998f0e0c860f5567b1b3d0c300bbd5f71d21a903894eac970d7857f8f7f13b"} Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 11:46:03.623689 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e5432d8-b092-46cd-8aab-cb194ebb23f7","Type":"ContainerStarted","Data":"c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010"} Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 11:46:03.624339 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 11:46:03.626170 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86df3233-1d99-4023-9ff7-55bab063bd7e","Type":"ContainerStarted","Data":"86f5be6bd5c6c96ac6fa32eb10fb12b3a6ff13232f47af92aa7367509967bd66"} Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 11:46:03.626630 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 11:46:03.685361 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.074106566 podStartE2EDuration="1m20.685341697s" podCreationTimestamp="2025-12-09 11:44:43 +0000 UTC" firstStartedPulling="2025-12-09 11:44:45.869910553 +0000 UTC m=+1068.409794859" lastFinishedPulling="2025-12-09 11:45:23.481145664 +0000 UTC m=+1106.021029990" observedRunningTime="2025-12-09 11:46:03.6728007 +0000 UTC m=+1146.212685016" watchObservedRunningTime="2025-12-09 11:46:03.685341697 +0000 UTC m=+1146.225226023" Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 
11:46:03.688420 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-w67cr" podStartSLOduration=2.348962521 podStartE2EDuration="15.688378364s" podCreationTimestamp="2025-12-09 11:45:48 +0000 UTC" firstStartedPulling="2025-12-09 11:45:49.642889594 +0000 UTC m=+1132.182773910" lastFinishedPulling="2025-12-09 11:46:02.982305437 +0000 UTC m=+1145.522189753" observedRunningTime="2025-12-09 11:46:03.641845657 +0000 UTC m=+1146.181729973" watchObservedRunningTime="2025-12-09 11:46:03.688378364 +0000 UTC m=+1146.228262690" Dec 09 11:46:03 crc kubenswrapper[4849]: I1209 11:46:03.714187 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.57135764 podStartE2EDuration="1m21.714169934s" podCreationTimestamp="2025-12-09 11:44:42 +0000 UTC" firstStartedPulling="2025-12-09 11:44:45.494265472 +0000 UTC m=+1068.034149788" lastFinishedPulling="2025-12-09 11:45:23.637077766 +0000 UTC m=+1106.176962082" observedRunningTime="2025-12-09 11:46:03.71360215 +0000 UTC m=+1146.253486476" watchObservedRunningTime="2025-12-09 11:46:03.714169934 +0000 UTC m=+1146.254054250" Dec 09 11:46:04 crc kubenswrapper[4849]: I1209 11:46:04.634102 4849 generic.go:334] "Generic (PLEG): container finished" podID="7eae9805-7adc-47e1-9b64-5a33793bd7d3" containerID="0b51148149f3594154a79630a3802e503af16038d2fea050f55007307679f39c" exitCode=0 Dec 09 11:46:04 crc kubenswrapper[4849]: I1209 11:46:04.634209 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-czrmh-config-986vr" event={"ID":"7eae9805-7adc-47e1-9b64-5a33793bd7d3","Type":"ContainerDied","Data":"0b51148149f3594154a79630a3802e503af16038d2fea050f55007307679f39c"} Dec 09 11:46:05 crc kubenswrapper[4849]: I1209 11:46:05.998372 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-czrmh-config-986vr"
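
The pod_startup_latency_tracker entries above can be checked by hand: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For glance-db-sync-w67cr that is 15.688378364s - 13.339415843s = 2.348962521s, exactly the logged value. A small Go check with the timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

// parse is a tiny helper for the journal's timestamp format; fractional
// seconds in the input are accepted even though the layout omits them.
func parse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the glance-db-sync-w67cr entry above.
	created := parse("2025-12-09 11:45:48 +0000 UTC")
	firstPull := parse("2025-12-09 11:45:49.642889594 +0000 UTC")
	lastPull := parse("2025-12-09 11:46:02.982305437 +0000 UTC")
	running := parse("2025-12-09 11:46:03.688378364 +0000 UTC")

	e2e := running.Sub(created)     // podStartE2EDuration
	pull := lastPull.Sub(firstPull) // image-pull window
	fmt.Println(e2e)        // 15.688378364s (matches the log)
	fmt.Println(e2e - pull) // 2.348962521s  (matches podStartSLOduration)
}
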
Need to start a new one" pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.060285 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-scripts\") pod \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.060342 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run\") pod \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.060368 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dknx2\" (UniqueName: \"kubernetes.io/projected/7eae9805-7adc-47e1-9b64-5a33793bd7d3-kube-api-access-dknx2\") pod \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.060475 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-log-ovn\") pod \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.060548 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run" (OuterVolumeSpecName: "var-run") pod "7eae9805-7adc-47e1-9b64-5a33793bd7d3" (UID: "7eae9805-7adc-47e1-9b64-5a33793bd7d3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.060629 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7eae9805-7adc-47e1-9b64-5a33793bd7d3" (UID: "7eae9805-7adc-47e1-9b64-5a33793bd7d3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.060697 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-additional-scripts\") pod \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.061249 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7eae9805-7adc-47e1-9b64-5a33793bd7d3" (UID: "7eae9805-7adc-47e1-9b64-5a33793bd7d3"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.061336 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run-ovn\") pod \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\" (UID: \"7eae9805-7adc-47e1-9b64-5a33793bd7d3\") " Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.061417 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7eae9805-7adc-47e1-9b64-5a33793bd7d3" (UID: "7eae9805-7adc-47e1-9b64-5a33793bd7d3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.061699 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-scripts" (OuterVolumeSpecName: "scripts") pod "7eae9805-7adc-47e1-9b64-5a33793bd7d3" (UID: "7eae9805-7adc-47e1-9b64-5a33793bd7d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.061920 4849 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.061944 4849 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.061955 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eae9805-7adc-47e1-9b64-5a33793bd7d3-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.061966 4849 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-run\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.061976 4849 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7eae9805-7adc-47e1-9b64-5a33793bd7d3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.075781 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eae9805-7adc-47e1-9b64-5a33793bd7d3-kube-api-access-dknx2" (OuterVolumeSpecName: "kube-api-access-dknx2") pod "7eae9805-7adc-47e1-9b64-5a33793bd7d3" (UID: "7eae9805-7adc-47e1-9b64-5a33793bd7d3"). InnerVolumeSpecName "kube-api-access-dknx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.163928 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dknx2\" (UniqueName: \"kubernetes.io/projected/7eae9805-7adc-47e1-9b64-5a33793bd7d3-kube-api-access-dknx2\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.655811 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-czrmh-config-986vr" event={"ID":"7eae9805-7adc-47e1-9b64-5a33793bd7d3","Type":"ContainerDied","Data":"cf998f0e0c860f5567b1b3d0c300bbd5f71d21a903894eac970d7857f8f7f13b"} Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.656143 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf998f0e0c860f5567b1b3d0c300bbd5f71d21a903894eac970d7857f8f7f13b" Dec 09 11:46:06 crc kubenswrapper[4849]: I1209 11:46:06.655887 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-czrmh-config-986vr" Dec 09 11:46:07 crc kubenswrapper[4849]: I1209 11:46:07.106399 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-czrmh-config-986vr"] Dec 09 11:46:07 crc kubenswrapper[4849]: I1209 11:46:07.114285 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-czrmh-config-986vr"] Dec 09 11:46:07 crc kubenswrapper[4849]: I1209 11:46:07.278171 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-czrmh" Dec 09 11:46:08 crc kubenswrapper[4849]: I1209 11:46:08.547780 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eae9805-7adc-47e1-9b64-5a33793bd7d3" path="/var/lib/kubelet/pods/7eae9805-7adc-47e1-9b64-5a33793bd7d3/volumes" Dec 09 11:46:11 crc kubenswrapper[4849]: I1209 11:46:11.694113 4849 generic.go:334] "Generic (PLEG): container finished" podID="6c21ca38-f4ba-44cb-99db-914844f473d0" containerID="ac4f94b6c6e2a145a5339d30f59a9e8bfba7c929483f0d7c5d693d4533522a68" exitCode=0 Dec 09 11:46:11 crc kubenswrapper[4849]: I1209 11:46:11.694210 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-w67cr" event={"ID":"6c21ca38-f4ba-44cb-99db-914844f473d0","Type":"ContainerDied","Data":"ac4f94b6c6e2a145a5339d30f59a9e8bfba7c929483f0d7c5d693d4533522a68"} Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.064901 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-w67cr" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.173747 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mssd6\" (UniqueName: \"kubernetes.io/projected/6c21ca38-f4ba-44cb-99db-914844f473d0-kube-api-access-mssd6\") pod \"6c21ca38-f4ba-44cb-99db-914844f473d0\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.173789 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-db-sync-config-data\") pod \"6c21ca38-f4ba-44cb-99db-914844f473d0\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.173838 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-config-data\") pod \"6c21ca38-f4ba-44cb-99db-914844f473d0\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.173871 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-combined-ca-bundle\") pod \"6c21ca38-f4ba-44cb-99db-914844f473d0\" (UID: \"6c21ca38-f4ba-44cb-99db-914844f473d0\") " Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.179778 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6c21ca38-f4ba-44cb-99db-914844f473d0" (UID: "6c21ca38-f4ba-44cb-99db-914844f473d0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.180363 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c21ca38-f4ba-44cb-99db-914844f473d0-kube-api-access-mssd6" (OuterVolumeSpecName: "kube-api-access-mssd6") pod "6c21ca38-f4ba-44cb-99db-914844f473d0" (UID: "6c21ca38-f4ba-44cb-99db-914844f473d0"). InnerVolumeSpecName "kube-api-access-mssd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.201397 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c21ca38-f4ba-44cb-99db-914844f473d0" (UID: "6c21ca38-f4ba-44cb-99db-914844f473d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.222856 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-config-data" (OuterVolumeSpecName: "config-data") pod "6c21ca38-f4ba-44cb-99db-914844f473d0" (UID: "6c21ca38-f4ba-44cb-99db-914844f473d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.276383 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mssd6\" (UniqueName: \"kubernetes.io/projected/6c21ca38-f4ba-44cb-99db-914844f473d0-kube-api-access-mssd6\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.276600 4849 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.276722 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.276808 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c21ca38-f4ba-44cb-99db-914844f473d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.709654 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-w67cr" event={"ID":"6c21ca38-f4ba-44cb-99db-914844f473d0","Type":"ContainerDied","Data":"8ce35d26dff1a1e5a46753fe44c429b9c08cef4c281aeb2b62669ad46f7fc9bd"} Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.709740 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce35d26dff1a1e5a46753fe44c429b9c08cef4c281aeb2b62669ad46f7fc9bd" Dec 09 11:46:13 crc kubenswrapper[4849]: I1209 11:46:13.709706 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-w67cr" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.179799 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4gbqd"] Dec 09 11:46:14 crc kubenswrapper[4849]: E1209 11:46:14.180227 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c21ca38-f4ba-44cb-99db-914844f473d0" containerName="glance-db-sync" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.180251 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c21ca38-f4ba-44cb-99db-914844f473d0" containerName="glance-db-sync" Dec 09 11:46:14 crc kubenswrapper[4849]: E1209 11:46:14.180286 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eae9805-7adc-47e1-9b64-5a33793bd7d3" containerName="ovn-config" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.180294 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eae9805-7adc-47e1-9b64-5a33793bd7d3" containerName="ovn-config" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.180533 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eae9805-7adc-47e1-9b64-5a33793bd7d3" containerName="ovn-config" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.180554 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c21ca38-f4ba-44cb-99db-914844f473d0" containerName="glance-db-sync" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.181636 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.186320 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4gbqd"] Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.323384 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.323872 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf5qp\" (UniqueName: \"kubernetes.io/projected/be5d3d9b-b033-4b73-8044-1064dd5d4443-kube-api-access-wf5qp\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.323932 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.323976 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.324040 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-config\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.426524 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.426580 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-config\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.426666 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.426708 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wf5qp\" (UniqueName: \"kubernetes.io/projected/be5d3d9b-b033-4b73-8044-1064dd5d4443-kube-api-access-wf5qp\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.426743 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.427671 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.427697 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.427844 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.427954 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-config\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.446449 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf5qp\" (UniqueName: \"kubernetes.io/projected/be5d3d9b-b033-4b73-8044-1064dd5d4443-kube-api-access-wf5qp\") pod \"dnsmasq-dns-54f9b7b8d9-4gbqd\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.506758 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:14 crc kubenswrapper[4849]: I1209 11:46:14.752640 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.070607 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4gbqd"] Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.240566 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.349556 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xqqlw"] Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.350537 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xqqlw" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.472537 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e146-account-create-update-nmmnl"] Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.473517 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e146-account-create-update-nmmnl" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.486420 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.554451 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5wxj\" (UniqueName: \"kubernetes.io/projected/588f68a7-71b1-409a-9abc-ff1e7d6683f9-kube-api-access-b5wxj\") pod \"cinder-db-create-xqqlw\" (UID: \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\") " pod="openstack/cinder-db-create-xqqlw" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.554495 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91a8ccf8-c54d-4a8c-a679-281e06d136da-operator-scripts\") pod \"cinder-e146-account-create-update-nmmnl\" (UID: \"91a8ccf8-c54d-4a8c-a679-281e06d136da\") " pod="openstack/cinder-e146-account-create-update-nmmnl" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.554578 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59x9\" (UniqueName: \"kubernetes.io/projected/91a8ccf8-c54d-4a8c-a679-281e06d136da-kube-api-access-z59x9\") pod \"cinder-e146-account-create-update-nmmnl\" (UID: \"91a8ccf8-c54d-4a8c-a679-281e06d136da\") " pod="openstack/cinder-e146-account-create-update-nmmnl" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.554657 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588f68a7-71b1-409a-9abc-ff1e7d6683f9-operator-scripts\") pod \"cinder-db-create-xqqlw\" (UID: \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\") " pod="openstack/cinder-db-create-xqqlw" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.580673 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xqqlw"] Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.648175 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e146-account-create-update-nmmnl"] Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.656052 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z59x9\" (UniqueName: \"kubernetes.io/projected/91a8ccf8-c54d-4a8c-a679-281e06d136da-kube-api-access-z59x9\") pod \"cinder-e146-account-create-update-nmmnl\" (UID: \"91a8ccf8-c54d-4a8c-a679-281e06d136da\") " pod="openstack/cinder-e146-account-create-update-nmmnl" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.656120 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588f68a7-71b1-409a-9abc-ff1e7d6683f9-operator-scripts\") pod \"cinder-db-create-xqqlw\" (UID: \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\") " pod="openstack/cinder-db-create-xqqlw" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.656185 4849 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-b5wxj\" (UniqueName: \"kubernetes.io/projected/588f68a7-71b1-409a-9abc-ff1e7d6683f9-kube-api-access-b5wxj\") pod \"cinder-db-create-xqqlw\" (UID: \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\") " pod="openstack/cinder-db-create-xqqlw" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.656206 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91a8ccf8-c54d-4a8c-a679-281e06d136da-operator-scripts\") pod \"cinder-e146-account-create-update-nmmnl\" (UID: \"91a8ccf8-c54d-4a8c-a679-281e06d136da\") " pod="openstack/cinder-e146-account-create-update-nmmnl" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.658112 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91a8ccf8-c54d-4a8c-a679-281e06d136da-operator-scripts\") pod \"cinder-e146-account-create-update-nmmnl\" (UID: \"91a8ccf8-c54d-4a8c-a679-281e06d136da\") " pod="openstack/cinder-e146-account-create-update-nmmnl" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.658127 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588f68a7-71b1-409a-9abc-ff1e7d6683f9-operator-scripts\") pod \"cinder-db-create-xqqlw\" (UID: \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\") " pod="openstack/cinder-db-create-xqqlw" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.724604 4849 generic.go:334] "Generic (PLEG): container finished" podID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerID="e0679b16f9682dfd91b75d712115ab7b9e1f0d12b19b4459c7175b9df06420b9" exitCode=0 Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.724646 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" event={"ID":"be5d3d9b-b033-4b73-8044-1064dd5d4443","Type":"ContainerDied","Data":"e0679b16f9682dfd91b75d712115ab7b9e1f0d12b19b4459c7175b9df06420b9"} Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.724671 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" event={"ID":"be5d3d9b-b033-4b73-8044-1064dd5d4443","Type":"ContainerStarted","Data":"3b25dec948a988e2bf06b6e0a3a0ac1cca571dca129faf0e1e44f87c4d089b2e"} Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.780176 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5wxj\" (UniqueName: \"kubernetes.io/projected/588f68a7-71b1-409a-9abc-ff1e7d6683f9-kube-api-access-b5wxj\") pod \"cinder-db-create-xqqlw\" (UID: \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\") " pod="openstack/cinder-db-create-xqqlw" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.783010 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z59x9\" (UniqueName: \"kubernetes.io/projected/91a8ccf8-c54d-4a8c-a679-281e06d136da-kube-api-access-z59x9\") pod \"cinder-e146-account-create-update-nmmnl\" (UID: \"91a8ccf8-c54d-4a8c-a679-281e06d136da\") " pod="openstack/cinder-e146-account-create-update-nmmnl" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.794897 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4bvc5"] Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.796211 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4bvc5" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.848699 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e146-account-create-update-nmmnl" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.861097 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4bvc5"] Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.962009 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjnwh\" (UniqueName: \"kubernetes.io/projected/373b7741-fc4b-4182-8e48-1120d1ba867b-kube-api-access-sjnwh\") pod \"barbican-db-create-4bvc5\" (UID: \"373b7741-fc4b-4182-8e48-1120d1ba867b\") " pod="openstack/barbican-db-create-4bvc5" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.962356 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373b7741-fc4b-4182-8e48-1120d1ba867b-operator-scripts\") pod \"barbican-db-create-4bvc5\" (UID: \"373b7741-fc4b-4182-8e48-1120d1ba867b\") " pod="openstack/barbican-db-create-4bvc5" Dec 09 11:46:15 crc kubenswrapper[4849]: I1209 11:46:15.963878 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xqqlw" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.068567 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjnwh\" (UniqueName: \"kubernetes.io/projected/373b7741-fc4b-4182-8e48-1120d1ba867b-kube-api-access-sjnwh\") pod \"barbican-db-create-4bvc5\" (UID: \"373b7741-fc4b-4182-8e48-1120d1ba867b\") " pod="openstack/barbican-db-create-4bvc5" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.068615 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373b7741-fc4b-4182-8e48-1120d1ba867b-operator-scripts\") pod \"barbican-db-create-4bvc5\" (UID: \"373b7741-fc4b-4182-8e48-1120d1ba867b\") " pod="openstack/barbican-db-create-4bvc5" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.069311 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373b7741-fc4b-4182-8e48-1120d1ba867b-operator-scripts\") pod \"barbican-db-create-4bvc5\" (UID: \"373b7741-fc4b-4182-8e48-1120d1ba867b\") " pod="openstack/barbican-db-create-4bvc5" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.116017 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjnwh\" (UniqueName: \"kubernetes.io/projected/373b7741-fc4b-4182-8e48-1120d1ba867b-kube-api-access-sjnwh\") pod \"barbican-db-create-4bvc5\" (UID: \"373b7741-fc4b-4182-8e48-1120d1ba867b\") " pod="openstack/barbican-db-create-4bvc5" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.188027 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4994-account-create-update-cghqb"] Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.192001 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4994-account-create-update-cghqb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.210437 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4bvc5" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.276871 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.286929 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4994-account-create-update-cghqb"] Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.293104 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2zpd\" (UniqueName: \"kubernetes.io/projected/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-kube-api-access-l2zpd\") pod \"barbican-4994-account-create-update-cghqb\" (UID: \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\") " pod="openstack/barbican-4994-account-create-update-cghqb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.293394 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-operator-scripts\") pod \"barbican-4994-account-create-update-cghqb\" (UID: \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\") " pod="openstack/barbican-4994-account-create-update-cghqb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.342882 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-s8dwb"] Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.344152 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.394590 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2zpd\" (UniqueName: \"kubernetes.io/projected/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-kube-api-access-l2zpd\") pod \"barbican-4994-account-create-update-cghqb\" (UID: \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\") " pod="openstack/barbican-4994-account-create-update-cghqb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.394680 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-operator-scripts\") pod \"barbican-4994-account-create-update-cghqb\" (UID: \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\") " pod="openstack/barbican-4994-account-create-update-cghqb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.395309 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-operator-scripts\") pod \"barbican-4994-account-create-update-cghqb\" (UID: \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\") " pod="openstack/barbican-4994-account-create-update-cghqb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.395569 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s8dwb"] Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.443214 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2zpd\" (UniqueName: \"kubernetes.io/projected/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-kube-api-access-l2zpd\") pod \"barbican-4994-account-create-update-cghqb\" (UID: \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\") " pod="openstack/barbican-4994-account-create-update-cghqb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.443291 4849 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gjkvp"] Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.458597 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.476946 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.477106 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.477311 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rqcxg" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.477480 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.499665 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smmjq\" (UniqueName: \"kubernetes.io/projected/c83943fe-425d-41b5-80c0-2ab81180e474-kube-api-access-smmjq\") pod \"neutron-db-create-s8dwb\" (UID: \"c83943fe-425d-41b5-80c0-2ab81180e474\") " pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.499800 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c83943fe-425d-41b5-80c0-2ab81180e474-operator-scripts\") pod \"neutron-db-create-s8dwb\" (UID: \"c83943fe-425d-41b5-80c0-2ab81180e474\") " pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.531781 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gjkvp"] Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.558682 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4994-account-create-update-cghqb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.576015 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-40e7-account-create-update-6hzsv"] Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.577782 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.581688 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.590394 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-40e7-account-create-update-6hzsv"] Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.613561 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c83943fe-425d-41b5-80c0-2ab81180e474-operator-scripts\") pod \"neutron-db-create-s8dwb\" (UID: \"c83943fe-425d-41b5-80c0-2ab81180e474\") " pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.613949 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4dg6\" (UniqueName: \"kubernetes.io/projected/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-kube-api-access-f4dg6\") pod \"keystone-db-sync-gjkvp\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.613995 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-combined-ca-bundle\") pod \"keystone-db-sync-gjkvp\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.614148 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smmjq\" (UniqueName: \"kubernetes.io/projected/c83943fe-425d-41b5-80c0-2ab81180e474-kube-api-access-smmjq\") pod \"neutron-db-create-s8dwb\" (UID: \"c83943fe-425d-41b5-80c0-2ab81180e474\") " pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.614199 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-config-data\") pod \"keystone-db-sync-gjkvp\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.615194 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c83943fe-425d-41b5-80c0-2ab81180e474-operator-scripts\") pod \"neutron-db-create-s8dwb\" (UID: \"c83943fe-425d-41b5-80c0-2ab81180e474\") " pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.682471 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smmjq\" (UniqueName: \"kubernetes.io/projected/c83943fe-425d-41b5-80c0-2ab81180e474-kube-api-access-smmjq\") pod \"neutron-db-create-s8dwb\" (UID: \"c83943fe-425d-41b5-80c0-2ab81180e474\") " pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.715342 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e652b8-2c10-4b35-8986-9f3178ff0556-operator-scripts\") pod \"neutron-40e7-account-create-update-6hzsv\" (UID: \"37e652b8-2c10-4b35-8986-9f3178ff0556\") " 
pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.715404 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lccw6\" (UniqueName: \"kubernetes.io/projected/37e652b8-2c10-4b35-8986-9f3178ff0556-kube-api-access-lccw6\") pod \"neutron-40e7-account-create-update-6hzsv\" (UID: \"37e652b8-2c10-4b35-8986-9f3178ff0556\") " pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.715463 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4dg6\" (UniqueName: \"kubernetes.io/projected/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-kube-api-access-f4dg6\") pod \"keystone-db-sync-gjkvp\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.715494 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-combined-ca-bundle\") pod \"keystone-db-sync-gjkvp\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.715603 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-config-data\") pod \"keystone-db-sync-gjkvp\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.726304 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-combined-ca-bundle\") pod \"keystone-db-sync-gjkvp\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.735480 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-config-data\") pod \"keystone-db-sync-gjkvp\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.736611 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e146-account-create-update-nmmnl"] Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.766697 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" event={"ID":"be5d3d9b-b033-4b73-8044-1064dd5d4443","Type":"ContainerStarted","Data":"6c982557461dadcf37aa08a92a036299bd84eaf2b86871158b690ef960e4af60"} Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.767757 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.769163 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4dg6\" (UniqueName: \"kubernetes.io/projected/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-kube-api-access-f4dg6\") pod \"keystone-db-sync-gjkvp\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.816999 4849 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e652b8-2c10-4b35-8986-9f3178ff0556-operator-scripts\") pod \"neutron-40e7-account-create-update-6hzsv\" (UID: \"37e652b8-2c10-4b35-8986-9f3178ff0556\") " pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.817043 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lccw6\" (UniqueName: \"kubernetes.io/projected/37e652b8-2c10-4b35-8986-9f3178ff0556-kube-api-access-lccw6\") pod \"neutron-40e7-account-create-update-6hzsv\" (UID: \"37e652b8-2c10-4b35-8986-9f3178ff0556\") " pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.819191 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" podStartSLOduration=2.818802983 podStartE2EDuration="2.818802983s" podCreationTimestamp="2025-12-09 11:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:16.808970596 +0000 UTC m=+1159.348854922" watchObservedRunningTime="2025-12-09 11:46:16.818802983 +0000 UTC m=+1159.358687299" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.821398 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e652b8-2c10-4b35-8986-9f3178ff0556-operator-scripts\") pod \"neutron-40e7-account-create-update-6hzsv\" (UID: \"37e652b8-2c10-4b35-8986-9f3178ff0556\") " pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.833338 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.877830 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lccw6\" (UniqueName: \"kubernetes.io/projected/37e652b8-2c10-4b35-8986-9f3178ff0556-kube-api-access-lccw6\") pod \"neutron-40e7-account-create-update-6hzsv\" (UID: \"37e652b8-2c10-4b35-8986-9f3178ff0556\") " pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.936190 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:16 crc kubenswrapper[4849]: I1209 11:46:16.979118 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:17 crc kubenswrapper[4849]: I1209 11:46:17.097518 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xqqlw"] Dec 09 11:46:17 crc kubenswrapper[4849]: W1209 11:46:17.102202 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod588f68a7_71b1_409a_9abc_ff1e7d6683f9.slice/crio-653b63a13089c40d765e7bf0f9f2803d7f5de0376b40103836ce5c6e1f65ca06 WatchSource:0}: Error finding container 653b63a13089c40d765e7bf0f9f2803d7f5de0376b40103836ce5c6e1f65ca06: Status 404 returned error can't find the container with id 653b63a13089c40d765e7bf0f9f2803d7f5de0376b40103836ce5c6e1f65ca06 Dec 09 11:46:17 crc kubenswrapper[4849]: I1209 11:46:17.226380 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4bvc5"] Dec 09 11:46:17 crc kubenswrapper[4849]: W1209 11:46:17.246559 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod373b7741_fc4b_4182_8e48_1120d1ba867b.slice/crio-8784a3c0792c3d1afcf8dd5d4c2a0a27a2c81bc6fb0a97a14a60b427751f12fc WatchSource:0}: Error finding container 8784a3c0792c3d1afcf8dd5d4c2a0a27a2c81bc6fb0a97a14a60b427751f12fc: Status 404 returned error can't find the container with id 8784a3c0792c3d1afcf8dd5d4c2a0a27a2c81bc6fb0a97a14a60b427751f12fc Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.410540 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gjkvp"] Dec 09 11:46:18 crc kubenswrapper[4849]: W1209 11:46:17.450679 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0134bbaa_98fd_401f_96b5_addf0aa2ce7d.slice/crio-1f2698b14a7f1263e961c8695ae9d15129e03d57a9cf0c7c7e7225b7171c00a0 WatchSource:0}: Error finding container 1f2698b14a7f1263e961c8695ae9d15129e03d57a9cf0c7c7e7225b7171c00a0: Status 404 returned error can't find the container with id 1f2698b14a7f1263e961c8695ae9d15129e03d57a9cf0c7c7e7225b7171c00a0 Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.483627 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4994-account-create-update-cghqb"] Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.824935 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e146-account-create-update-nmmnl" event={"ID":"91a8ccf8-c54d-4a8c-a679-281e06d136da","Type":"ContainerStarted","Data":"12bb21ef6cc32c55a57dba03d9e27d2fd0d1fe37df84ebb2981b73389864171f"} Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.830967 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e146-account-create-update-nmmnl" event={"ID":"91a8ccf8-c54d-4a8c-a679-281e06d136da","Type":"ContainerStarted","Data":"ee6c0cdc869ffbccf6bd737e860142dfa65f4af6494e05de6098b16c329d5444"} Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.848353 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4994-account-create-update-cghqb" event={"ID":"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8","Type":"ContainerStarted","Data":"e2a05172e56acbcbc5821bb87c789456a2e608c1f7b248b307cc491672d80047"} Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.872441 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xqqlw" 
event={"ID":"588f68a7-71b1-409a-9abc-ff1e7d6683f9","Type":"ContainerStarted","Data":"9bf0a228e6bde28b69c2717ceab428b83fcef3c698ba9521eb0a56195a936b4a"} Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.872487 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xqqlw" event={"ID":"588f68a7-71b1-409a-9abc-ff1e7d6683f9","Type":"ContainerStarted","Data":"653b63a13089c40d765e7bf0f9f2803d7f5de0376b40103836ce5c6e1f65ca06"} Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.878442 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gjkvp" event={"ID":"0134bbaa-98fd-401f-96b5-addf0aa2ce7d","Type":"ContainerStarted","Data":"1f2698b14a7f1263e961c8695ae9d15129e03d57a9cf0c7c7e7225b7171c00a0"} Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.883298 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e146-account-create-update-nmmnl" podStartSLOduration=2.883283815 podStartE2EDuration="2.883283815s" podCreationTimestamp="2025-12-09 11:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:17.8730983 +0000 UTC m=+1160.412982626" watchObservedRunningTime="2025-12-09 11:46:17.883283815 +0000 UTC m=+1160.423168131" Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.903365 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4bvc5" event={"ID":"373b7741-fc4b-4182-8e48-1120d1ba867b","Type":"ContainerStarted","Data":"8784a3c0792c3d1afcf8dd5d4c2a0a27a2c81bc6fb0a97a14a60b427751f12fc"} Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.928828 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-xqqlw" podStartSLOduration=2.9288104710000002 podStartE2EDuration="2.928810471s" podCreationTimestamp="2025-12-09 11:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:17.89673713 +0000 UTC m=+1160.436621446" watchObservedRunningTime="2025-12-09 11:46:17.928810471 +0000 UTC m=+1160.468694777" Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:17.936717 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-4bvc5" podStartSLOduration=2.936698538 podStartE2EDuration="2.936698538s" podCreationTimestamp="2025-12-09 11:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:17.926793221 +0000 UTC m=+1160.466677537" watchObservedRunningTime="2025-12-09 11:46:17.936698538 +0000 UTC m=+1160.476582854" Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:18.834227 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s8dwb"] Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:18.911898 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s8dwb" event={"ID":"c83943fe-425d-41b5-80c0-2ab81180e474","Type":"ContainerStarted","Data":"db6e95c455e6a64f96d946359d55476e27611eb2e3e7cae894c24269e6a68aa3"} Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:18.913074 4849 generic.go:334] "Generic (PLEG): container finished" podID="373b7741-fc4b-4182-8e48-1120d1ba867b" containerID="ace995a118bf9847ffecf65f8c3e8166ce8cd5c14447f7081e70e9e3353d3289" exitCode=0 Dec 09 11:46:18 crc 
Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:18.945804 4849 generic.go:334] "Generic (PLEG): container finished" podID="91a8ccf8-c54d-4a8c-a679-281e06d136da" containerID="12bb21ef6cc32c55a57dba03d9e27d2fd0d1fe37df84ebb2981b73389864171f" exitCode=0
Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:18.946235 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e146-account-create-update-nmmnl" event={"ID":"91a8ccf8-c54d-4a8c-a679-281e06d136da","Type":"ContainerDied","Data":"12bb21ef6cc32c55a57dba03d9e27d2fd0d1fe37df84ebb2981b73389864171f"}
Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:18.959817 4849 generic.go:334] "Generic (PLEG): container finished" podID="10ecb332-bacf-4550-93d1-2e3cb5f9e3f8" containerID="cbb022a85de05d6168155a8fe29307ac0df6f9a396791bb03a2c6d83391e0692" exitCode=0
Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:18.959921 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4994-account-create-update-cghqb" event={"ID":"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8","Type":"ContainerDied","Data":"cbb022a85de05d6168155a8fe29307ac0df6f9a396791bb03a2c6d83391e0692"}
Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:18.961379 4849 generic.go:334] "Generic (PLEG): container finished" podID="588f68a7-71b1-409a-9abc-ff1e7d6683f9" containerID="9bf0a228e6bde28b69c2717ceab428b83fcef3c698ba9521eb0a56195a936b4a" exitCode=0
Dec 09 11:46:18 crc kubenswrapper[4849]: I1209 11:46:18.962240 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xqqlw" event={"ID":"588f68a7-71b1-409a-9abc-ff1e7d6683f9","Type":"ContainerDied","Data":"9bf0a228e6bde28b69c2717ceab428b83fcef3c698ba9521eb0a56195a936b4a"}
Dec 09 11:46:19 crc kubenswrapper[4849]: I1209 11:46:19.009692 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-40e7-account-create-update-6hzsv"]
Dec 09 11:46:19 crc kubenswrapper[4849]: I1209 11:46:19.975170 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-40e7-account-create-update-6hzsv" event={"ID":"37e652b8-2c10-4b35-8986-9f3178ff0556","Type":"ContainerStarted","Data":"c6cfc62c1c5be286a9994626a9907c5e1427610a6f1d6522cf3a8d44fd3d4099"}
Dec 09 11:46:19 crc kubenswrapper[4849]: I1209 11:46:19.975546 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-40e7-account-create-update-6hzsv" event={"ID":"37e652b8-2c10-4b35-8986-9f3178ff0556","Type":"ContainerStarted","Data":"8792d64bafad30b770340a020fbc021a22aff007d33e617c8f51cb13054036ab"}
Dec 09 11:46:19 crc kubenswrapper[4849]: I1209 11:46:19.979493 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s8dwb" event={"ID":"c83943fe-425d-41b5-80c0-2ab81180e474","Type":"ContainerStarted","Data":"46ab55db8f827157cb0cb13cf84f10490fb52f2f479f61d5cf8644805f8d1896"}
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:19.997854 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-40e7-account-create-update-6hzsv" podStartSLOduration=3.997838356 podStartE2EDuration="3.997838356s" podCreationTimestamp="2025-12-09 11:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:19.99640542 +0000 UTC m=+1162.536289736" watchObservedRunningTime="2025-12-09 11:46:19.997838356 +0000 UTC m=+1162.537722672"
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.017702 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-s8dwb" podStartSLOduration=4.017683571 podStartE2EDuration="4.017683571s" podCreationTimestamp="2025-12-09 11:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:20.009944207 +0000 UTC m=+1162.549828523" watchObservedRunningTime="2025-12-09 11:46:20.017683571 +0000 UTC m=+1162.557567887"
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.495481 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e146-account-create-update-nmmnl"
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.570687 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91a8ccf8-c54d-4a8c-a679-281e06d136da-operator-scripts\") pod \"91a8ccf8-c54d-4a8c-a679-281e06d136da\" (UID: \"91a8ccf8-c54d-4a8c-a679-281e06d136da\") "
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.570837 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z59x9\" (UniqueName: \"kubernetes.io/projected/91a8ccf8-c54d-4a8c-a679-281e06d136da-kube-api-access-z59x9\") pod \"91a8ccf8-c54d-4a8c-a679-281e06d136da\" (UID: \"91a8ccf8-c54d-4a8c-a679-281e06d136da\") "
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.571521 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91a8ccf8-c54d-4a8c-a679-281e06d136da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91a8ccf8-c54d-4a8c-a679-281e06d136da" (UID: "91a8ccf8-c54d-4a8c-a679-281e06d136da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.590805 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a8ccf8-c54d-4a8c-a679-281e06d136da-kube-api-access-z59x9" (OuterVolumeSpecName: "kube-api-access-z59x9") pod "91a8ccf8-c54d-4a8c-a679-281e06d136da" (UID: "91a8ccf8-c54d-4a8c-a679-281e06d136da"). InnerVolumeSpecName "kube-api-access-z59x9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.662342 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xqqlw"
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.674431 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z59x9\" (UniqueName: \"kubernetes.io/projected/91a8ccf8-c54d-4a8c-a679-281e06d136da-kube-api-access-z59x9\") on node \"crc\" DevicePath \"\""
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.674496 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91a8ccf8-c54d-4a8c-a679-281e06d136da-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.691617 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4bvc5"
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.719094 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4994-account-create-update-cghqb"
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.776605 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/373b7741-fc4b-4182-8e48-1120d1ba867b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "373b7741-fc4b-4182-8e48-1120d1ba867b" (UID: "373b7741-fc4b-4182-8e48-1120d1ba867b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.777520 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373b7741-fc4b-4182-8e48-1120d1ba867b-operator-scripts\") pod \"373b7741-fc4b-4182-8e48-1120d1ba867b\" (UID: \"373b7741-fc4b-4182-8e48-1120d1ba867b\") "
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.777678 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-operator-scripts\") pod \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\" (UID: \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\") "
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.777739 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588f68a7-71b1-409a-9abc-ff1e7d6683f9-operator-scripts\") pod \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\" (UID: \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\") "
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.777777 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2zpd\" (UniqueName: \"kubernetes.io/projected/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-kube-api-access-l2zpd\") pod \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\" (UID: \"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8\") "
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.777823 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjnwh\" (UniqueName: \"kubernetes.io/projected/373b7741-fc4b-4182-8e48-1120d1ba867b-kube-api-access-sjnwh\") pod \"373b7741-fc4b-4182-8e48-1120d1ba867b\" (UID: \"373b7741-fc4b-4182-8e48-1120d1ba867b\") "
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.777843 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5wxj\" (UniqueName: \"kubernetes.io/projected/588f68a7-71b1-409a-9abc-ff1e7d6683f9-kube-api-access-b5wxj\") pod \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\" (UID: \"588f68a7-71b1-409a-9abc-ff1e7d6683f9\") "
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.778260 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373b7741-fc4b-4182-8e48-1120d1ba867b-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.780188 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588f68a7-71b1-409a-9abc-ff1e7d6683f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "588f68a7-71b1-409a-9abc-ff1e7d6683f9" (UID: "588f68a7-71b1-409a-9abc-ff1e7d6683f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.780698 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10ecb332-bacf-4550-93d1-2e3cb5f9e3f8" (UID: "10ecb332-bacf-4550-93d1-2e3cb5f9e3f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.784420 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-kube-api-access-l2zpd" (OuterVolumeSpecName: "kube-api-access-l2zpd") pod "10ecb332-bacf-4550-93d1-2e3cb5f9e3f8" (UID: "10ecb332-bacf-4550-93d1-2e3cb5f9e3f8"). InnerVolumeSpecName "kube-api-access-l2zpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.784774 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588f68a7-71b1-409a-9abc-ff1e7d6683f9-kube-api-access-b5wxj" (OuterVolumeSpecName: "kube-api-access-b5wxj") pod "588f68a7-71b1-409a-9abc-ff1e7d6683f9" (UID: "588f68a7-71b1-409a-9abc-ff1e7d6683f9"). InnerVolumeSpecName "kube-api-access-b5wxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.784960 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373b7741-fc4b-4182-8e48-1120d1ba867b-kube-api-access-sjnwh" (OuterVolumeSpecName: "kube-api-access-sjnwh") pod "373b7741-fc4b-4182-8e48-1120d1ba867b" (UID: "373b7741-fc4b-4182-8e48-1120d1ba867b"). InnerVolumeSpecName "kube-api-access-sjnwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.880517 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjnwh\" (UniqueName: \"kubernetes.io/projected/373b7741-fc4b-4182-8e48-1120d1ba867b-kube-api-access-sjnwh\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.880562 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5wxj\" (UniqueName: \"kubernetes.io/projected/588f68a7-71b1-409a-9abc-ff1e7d6683f9-kube-api-access-b5wxj\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.880576 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.880586 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588f68a7-71b1-409a-9abc-ff1e7d6683f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.880596 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2zpd\" (UniqueName: \"kubernetes.io/projected/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8-kube-api-access-l2zpd\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.988156 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e146-account-create-update-nmmnl" event={"ID":"91a8ccf8-c54d-4a8c-a679-281e06d136da","Type":"ContainerDied","Data":"ee6c0cdc869ffbccf6bd737e860142dfa65f4af6494e05de6098b16c329d5444"} Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.988203 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee6c0cdc869ffbccf6bd737e860142dfa65f4af6494e05de6098b16c329d5444" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.988274 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e146-account-create-update-nmmnl" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.994492 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4994-account-create-update-cghqb" event={"ID":"10ecb332-bacf-4550-93d1-2e3cb5f9e3f8","Type":"ContainerDied","Data":"e2a05172e56acbcbc5821bb87c789456a2e608c1f7b248b307cc491672d80047"} Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.994519 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a05172e56acbcbc5821bb87c789456a2e608c1f7b248b307cc491672d80047" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.994565 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4994-account-create-update-cghqb" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.998647 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xqqlw" event={"ID":"588f68a7-71b1-409a-9abc-ff1e7d6683f9","Type":"ContainerDied","Data":"653b63a13089c40d765e7bf0f9f2803d7f5de0376b40103836ce5c6e1f65ca06"} Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.998689 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="653b63a13089c40d765e7bf0f9f2803d7f5de0376b40103836ce5c6e1f65ca06" Dec 09 11:46:20 crc kubenswrapper[4849]: I1209 11:46:20.998751 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xqqlw" Dec 09 11:46:21 crc kubenswrapper[4849]: I1209 11:46:21.007049 4849 generic.go:334] "Generic (PLEG): container finished" podID="37e652b8-2c10-4b35-8986-9f3178ff0556" containerID="c6cfc62c1c5be286a9994626a9907c5e1427610a6f1d6522cf3a8d44fd3d4099" exitCode=0 Dec 09 11:46:21 crc kubenswrapper[4849]: I1209 11:46:21.007145 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-40e7-account-create-update-6hzsv" event={"ID":"37e652b8-2c10-4b35-8986-9f3178ff0556","Type":"ContainerDied","Data":"c6cfc62c1c5be286a9994626a9907c5e1427610a6f1d6522cf3a8d44fd3d4099"} Dec 09 11:46:21 crc kubenswrapper[4849]: I1209 11:46:21.020314 4849 generic.go:334] "Generic (PLEG): container finished" podID="c83943fe-425d-41b5-80c0-2ab81180e474" containerID="46ab55db8f827157cb0cb13cf84f10490fb52f2f479f61d5cf8644805f8d1896" exitCode=0 Dec 09 11:46:21 crc kubenswrapper[4849]: I1209 11:46:21.020464 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s8dwb" event={"ID":"c83943fe-425d-41b5-80c0-2ab81180e474","Type":"ContainerDied","Data":"46ab55db8f827157cb0cb13cf84f10490fb52f2f479f61d5cf8644805f8d1896"} Dec 09 11:46:21 crc kubenswrapper[4849]: I1209 11:46:21.025840 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4bvc5" event={"ID":"373b7741-fc4b-4182-8e48-1120d1ba867b","Type":"ContainerDied","Data":"8784a3c0792c3d1afcf8dd5d4c2a0a27a2c81bc6fb0a97a14a60b427751f12fc"} Dec 09 11:46:21 crc kubenswrapper[4849]: I1209 11:46:21.025879 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8784a3c0792c3d1afcf8dd5d4c2a0a27a2c81bc6fb0a97a14a60b427751f12fc" Dec 09 11:46:21 crc kubenswrapper[4849]: I1209 11:46:21.026132 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4bvc5" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.052838 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-40e7-account-create-update-6hzsv" event={"ID":"37e652b8-2c10-4b35-8986-9f3178ff0556","Type":"ContainerDied","Data":"8792d64bafad30b770340a020fbc021a22aff007d33e617c8f51cb13054036ab"} Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.053191 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8792d64bafad30b770340a020fbc021a22aff007d33e617c8f51cb13054036ab" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.055424 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s8dwb" event={"ID":"c83943fe-425d-41b5-80c0-2ab81180e474","Type":"ContainerDied","Data":"db6e95c455e6a64f96d946359d55476e27611eb2e3e7cae894c24269e6a68aa3"} Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.055459 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db6e95c455e6a64f96d946359d55476e27611eb2e3e7cae894c24269e6a68aa3" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.060234 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.129987 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.133439 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c83943fe-425d-41b5-80c0-2ab81180e474-operator-scripts\") pod \"c83943fe-425d-41b5-80c0-2ab81180e474\" (UID: \"c83943fe-425d-41b5-80c0-2ab81180e474\") " Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.133547 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smmjq\" (UniqueName: \"kubernetes.io/projected/c83943fe-425d-41b5-80c0-2ab81180e474-kube-api-access-smmjq\") pod \"c83943fe-425d-41b5-80c0-2ab81180e474\" (UID: \"c83943fe-425d-41b5-80c0-2ab81180e474\") " Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.134042 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c83943fe-425d-41b5-80c0-2ab81180e474-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c83943fe-425d-41b5-80c0-2ab81180e474" (UID: "c83943fe-425d-41b5-80c0-2ab81180e474"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.134272 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c83943fe-425d-41b5-80c0-2ab81180e474-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.137931 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83943fe-425d-41b5-80c0-2ab81180e474-kube-api-access-smmjq" (OuterVolumeSpecName: "kube-api-access-smmjq") pod "c83943fe-425d-41b5-80c0-2ab81180e474" (UID: "c83943fe-425d-41b5-80c0-2ab81180e474"). InnerVolumeSpecName "kube-api-access-smmjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.235750 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e652b8-2c10-4b35-8986-9f3178ff0556-operator-scripts\") pod \"37e652b8-2c10-4b35-8986-9f3178ff0556\" (UID: \"37e652b8-2c10-4b35-8986-9f3178ff0556\") " Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.235903 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lccw6\" (UniqueName: \"kubernetes.io/projected/37e652b8-2c10-4b35-8986-9f3178ff0556-kube-api-access-lccw6\") pod \"37e652b8-2c10-4b35-8986-9f3178ff0556\" (UID: \"37e652b8-2c10-4b35-8986-9f3178ff0556\") " Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.236360 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smmjq\" (UniqueName: \"kubernetes.io/projected/c83943fe-425d-41b5-80c0-2ab81180e474-kube-api-access-smmjq\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.236818 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e652b8-2c10-4b35-8986-9f3178ff0556-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37e652b8-2c10-4b35-8986-9f3178ff0556" (UID: "37e652b8-2c10-4b35-8986-9f3178ff0556"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.241089 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e652b8-2c10-4b35-8986-9f3178ff0556-kube-api-access-lccw6" (OuterVolumeSpecName: "kube-api-access-lccw6") pod "37e652b8-2c10-4b35-8986-9f3178ff0556" (UID: "37e652b8-2c10-4b35-8986-9f3178ff0556"). InnerVolumeSpecName "kube-api-access-lccw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.338352 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e652b8-2c10-4b35-8986-9f3178ff0556-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.338387 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lccw6\" (UniqueName: \"kubernetes.io/projected/37e652b8-2c10-4b35-8986-9f3178ff0556-kube-api-access-lccw6\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.511655 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.609743 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vxhgv"] Dec 09 11:46:24 crc kubenswrapper[4849]: I1209 11:46:24.610622 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" podUID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" containerName="dnsmasq-dns" containerID="cri-o://0b0abbb896d4f2a29eadeded67dc3b2b9705c1bee2c164d6e717b4a010e2735b" gracePeriod=10 Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.070251 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gjkvp" event={"ID":"0134bbaa-98fd-401f-96b5-addf0aa2ce7d","Type":"ContainerStarted","Data":"a596edae188ebe7c5fb3747e2e471aee9227fe7538ab2fe110b22e04d0fd65f6"} Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.071657 4849 generic.go:334] "Generic (PLEG): container finished" podID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" containerID="0b0abbb896d4f2a29eadeded67dc3b2b9705c1bee2c164d6e717b4a010e2735b" exitCode=0 Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.071733 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-40e7-account-create-update-6hzsv" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.072211 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" event={"ID":"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9","Type":"ContainerDied","Data":"0b0abbb896d4f2a29eadeded67dc3b2b9705c1bee2c164d6e717b4a010e2735b"} Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.072235 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" event={"ID":"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9","Type":"ContainerDied","Data":"861b56c1c77b2af2ec1e513843b2aa00e28fb26ee3a4e65ed88c7ce799fadff8"} Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.072248 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="861b56c1c77b2af2ec1e513843b2aa00e28fb26ee3a4e65ed88c7ce799fadff8" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.072292 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s8dwb" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.084045 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.097188 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gjkvp" podStartSLOduration=2.583528387 podStartE2EDuration="9.097170195s" podCreationTimestamp="2025-12-09 11:46:16 +0000 UTC" firstStartedPulling="2025-12-09 11:46:17.460018208 +0000 UTC m=+1159.999902524" lastFinishedPulling="2025-12-09 11:46:23.973660026 +0000 UTC m=+1166.513544332" observedRunningTime="2025-12-09 11:46:25.096853687 +0000 UTC m=+1167.636738013" watchObservedRunningTime="2025-12-09 11:46:25.097170195 +0000 UTC m=+1167.637054511" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.153120 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z9t7\" (UniqueName: \"kubernetes.io/projected/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-kube-api-access-9z9t7\") pod \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.153361 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-dns-svc\") pod \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.153523 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-sb\") pod \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.153566 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-nb\") pod \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.153648 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-config\") pod \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\" (UID: \"e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9\") " Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.165416 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-kube-api-access-9z9t7" (OuterVolumeSpecName: "kube-api-access-9z9t7") pod "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" (UID: "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9"). InnerVolumeSpecName "kube-api-access-9z9t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.209393 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" (UID: "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.219063 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" (UID: "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.225838 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-config" (OuterVolumeSpecName: "config") pod "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" (UID: "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.231352 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" (UID: "e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.257023 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.257057 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z9t7\" (UniqueName: \"kubernetes.io/projected/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-kube-api-access-9z9t7\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.257073 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.257085 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:25 crc kubenswrapper[4849]: I1209 11:46:25.257095 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:26 crc kubenswrapper[4849]: I1209 11:46:26.079866 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" Dec 09 11:46:26 crc kubenswrapper[4849]: I1209 11:46:26.141368 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vxhgv"] Dec 09 11:46:26 crc kubenswrapper[4849]: I1209 11:46:26.156910 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vxhgv"] Dec 09 11:46:26 crc kubenswrapper[4849]: I1209 11:46:26.546243 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" path="/var/lib/kubelet/pods/e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9/volumes" Dec 09 11:46:28 crc kubenswrapper[4849]: I1209 11:46:28.097087 4849 generic.go:334] "Generic (PLEG): container finished" podID="0134bbaa-98fd-401f-96b5-addf0aa2ce7d" containerID="a596edae188ebe7c5fb3747e2e471aee9227fe7538ab2fe110b22e04d0fd65f6" exitCode=0 Dec 09 11:46:28 crc kubenswrapper[4849]: I1209 11:46:28.097458 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gjkvp" event={"ID":"0134bbaa-98fd-401f-96b5-addf0aa2ce7d","Type":"ContainerDied","Data":"a596edae188ebe7c5fb3747e2e471aee9227fe7538ab2fe110b22e04d0fd65f6"} Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.533620 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.648521 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-config-data\") pod \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.648748 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4dg6\" (UniqueName: \"kubernetes.io/projected/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-kube-api-access-f4dg6\") pod \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.648787 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-combined-ca-bundle\") pod \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\" (UID: \"0134bbaa-98fd-401f-96b5-addf0aa2ce7d\") " Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.655194 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-kube-api-access-f4dg6" (OuterVolumeSpecName: "kube-api-access-f4dg6") pod "0134bbaa-98fd-401f-96b5-addf0aa2ce7d" (UID: "0134bbaa-98fd-401f-96b5-addf0aa2ce7d"). InnerVolumeSpecName "kube-api-access-f4dg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.672332 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0134bbaa-98fd-401f-96b5-addf0aa2ce7d" (UID: "0134bbaa-98fd-401f-96b5-addf0aa2ce7d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.690729 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-config-data" (OuterVolumeSpecName: "config-data") pod "0134bbaa-98fd-401f-96b5-addf0aa2ce7d" (UID: "0134bbaa-98fd-401f-96b5-addf0aa2ce7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.751089 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.751161 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4dg6\" (UniqueName: \"kubernetes.io/projected/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-kube-api-access-f4dg6\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.751171 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0134bbaa-98fd-401f-96b5-addf0aa2ce7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:29 crc kubenswrapper[4849]: I1209 11:46:29.969122 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-vxhgv" podUID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.116880 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gjkvp" event={"ID":"0134bbaa-98fd-401f-96b5-addf0aa2ce7d","Type":"ContainerDied","Data":"1f2698b14a7f1263e961c8695ae9d15129e03d57a9cf0c7c7e7225b7171c00a0"} Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.116916 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gjkvp" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.116929 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f2698b14a7f1263e961c8695ae9d15129e03d57a9cf0c7c7e7225b7171c00a0" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.411957 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nrl4d"] Dec 09 11:46:30 crc kubenswrapper[4849]: E1209 11:46:30.412333 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a8ccf8-c54d-4a8c-a679-281e06d136da" containerName="mariadb-account-create-update" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412356 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a8ccf8-c54d-4a8c-a679-281e06d136da" containerName="mariadb-account-create-update" Dec 09 11:46:30 crc kubenswrapper[4849]: E1209 11:46:30.412376 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" containerName="init" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412382 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" containerName="init" Dec 09 11:46:30 crc kubenswrapper[4849]: E1209 11:46:30.412396 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588f68a7-71b1-409a-9abc-ff1e7d6683f9" containerName="mariadb-database-create" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412423 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="588f68a7-71b1-409a-9abc-ff1e7d6683f9" containerName="mariadb-database-create" Dec 09 11:46:30 crc kubenswrapper[4849]: E1209 11:46:30.412442 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83943fe-425d-41b5-80c0-2ab81180e474" containerName="mariadb-database-create" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412449 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83943fe-425d-41b5-80c0-2ab81180e474" containerName="mariadb-database-create" Dec 09 11:46:30 crc kubenswrapper[4849]: E1209 11:46:30.412466 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0134bbaa-98fd-401f-96b5-addf0aa2ce7d" containerName="keystone-db-sync" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412474 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="0134bbaa-98fd-401f-96b5-addf0aa2ce7d" containerName="keystone-db-sync" Dec 09 11:46:30 crc kubenswrapper[4849]: E1209 11:46:30.412483 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" containerName="dnsmasq-dns" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412490 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" containerName="dnsmasq-dns" Dec 09 11:46:30 crc kubenswrapper[4849]: E1209 11:46:30.412503 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ecb332-bacf-4550-93d1-2e3cb5f9e3f8" containerName="mariadb-account-create-update" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412511 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ecb332-bacf-4550-93d1-2e3cb5f9e3f8" containerName="mariadb-account-create-update" Dec 09 11:46:30 crc kubenswrapper[4849]: E1209 11:46:30.412522 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e652b8-2c10-4b35-8986-9f3178ff0556" containerName="mariadb-account-create-update" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 
11:46:30.412530 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e652b8-2c10-4b35-8986-9f3178ff0556" containerName="mariadb-account-create-update" Dec 09 11:46:30 crc kubenswrapper[4849]: E1209 11:46:30.412549 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373b7741-fc4b-4182-8e48-1120d1ba867b" containerName="mariadb-database-create" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412557 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="373b7741-fc4b-4182-8e48-1120d1ba867b" containerName="mariadb-database-create" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412783 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="0134bbaa-98fd-401f-96b5-addf0aa2ce7d" containerName="keystone-db-sync" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412822 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f2cd59-9a0e-4711-90a5-a2dc0b8857b9" containerName="dnsmasq-dns" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412841 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="588f68a7-71b1-409a-9abc-ff1e7d6683f9" containerName="mariadb-database-create" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412865 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83943fe-425d-41b5-80c0-2ab81180e474" containerName="mariadb-database-create" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412888 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e652b8-2c10-4b35-8986-9f3178ff0556" containerName="mariadb-account-create-update" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412907 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a8ccf8-c54d-4a8c-a679-281e06d136da" containerName="mariadb-account-create-update" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412920 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="373b7741-fc4b-4182-8e48-1120d1ba867b" containerName="mariadb-database-create" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.412933 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ecb332-bacf-4550-93d1-2e3cb5f9e3f8" containerName="mariadb-account-create-update" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.413852 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.433463 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nrl4d"] Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.449295 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mvpf9"] Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.450643 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.478611 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.478833 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.478957 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rqcxg" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.479176 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.479322 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.480658 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgnt\" (UniqueName: \"kubernetes.io/projected/e57021f3-19cd-4765-8f7b-a8cf451bbd70-kube-api-access-bpgnt\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.480774 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-scripts\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.480900 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-combined-ca-bundle\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.480978 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-config-data\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.481178 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-credential-keys\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.481286 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-fernet-keys\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.491884 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mvpf9"] Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.584663 4849 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-credential-keys\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.584755 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kw9q\" (UniqueName: \"kubernetes.io/projected/7ce82e74-f22d-4720-b298-95d0251583f6-kube-api-access-5kw9q\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.584796 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.584833 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-fernet-keys\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.584882 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.584945 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-dns-svc\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.585003 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-config\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.590710 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpgnt\" (UniqueName: \"kubernetes.io/projected/e57021f3-19cd-4765-8f7b-a8cf451bbd70-kube-api-access-bpgnt\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.590797 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-scripts\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.590924 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-combined-ca-bundle\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.590954 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-config-data\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.602389 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-combined-ca-bundle\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.621323 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-config-data\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.625098 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-scripts\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.657978 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-credential-keys\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.660576 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpgnt\" (UniqueName: \"kubernetes.io/projected/e57021f3-19cd-4765-8f7b-a8cf451bbd70-kube-api-access-bpgnt\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.666215 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-fernet-keys\") pod \"keystone-bootstrap-mvpf9\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.694261 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kw9q\" (UniqueName: \"kubernetes.io/projected/7ce82e74-f22d-4720-b298-95d0251583f6-kube-api-access-5kw9q\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.694304 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: 
\"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.694335 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.694380 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-dns-svc\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.694426 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-config\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.696740 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.697224 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.697896 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-config\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.698027 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-dns-svc\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.750729 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kw9q\" (UniqueName: \"kubernetes.io/projected/7ce82e74-f22d-4720-b298-95d0251583f6-kube-api-access-5kw9q\") pod \"dnsmasq-dns-6546db6db7-nrl4d\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.791592 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.807836 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.939714 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nmgsr"] Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.940817 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.953705 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5gjhd"] Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.956039 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.956211 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2rmx9" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.956503 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.964351 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.964679 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nmgsr"] Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.979289 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-x68vd" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.979664 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 11:46:30 crc kubenswrapper[4849]: I1209 11:46:30.979662 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.007304 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df8301f3-a405-47fc-b1a8-475daf544079-etc-machine-id\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.009650 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-config-data\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.009807 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-db-sync-config-data\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.009962 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-scripts\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.010062 4849 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-combined-ca-bundle\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.010148 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvlfs\" (UniqueName: \"kubernetes.io/projected/df8301f3-a405-47fc-b1a8-475daf544079-kube-api-access-kvlfs\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.036941 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5gjhd"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.055292 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7mnkd"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.066588 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.098021 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2d75s" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.098696 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.112076 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-combined-ca-bundle\") pod \"neutron-db-sync-5gjhd\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.112134 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df8301f3-a405-47fc-b1a8-475daf544079-etc-machine-id\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.112163 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggrq5\" (UniqueName: \"kubernetes.io/projected/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-kube-api-access-ggrq5\") pod \"neutron-db-sync-5gjhd\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.112204 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-config-data\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.112227 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-db-sync-config-data\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.112259 4849 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-config\") pod \"neutron-db-sync-5gjhd\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.112298 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-scripts\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.112323 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-combined-ca-bundle\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.112345 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvlfs\" (UniqueName: \"kubernetes.io/projected/df8301f3-a405-47fc-b1a8-475daf544079-kube-api-access-kvlfs\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.118577 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df8301f3-a405-47fc-b1a8-475daf544079-etc-machine-id\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.128979 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-scripts\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.129120 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-combined-ca-bundle\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.132305 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-db-sync-config-data\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.134785 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-config-data\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.183852 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7mnkd"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.214546 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-db-sync-config-data\") pod \"barbican-db-sync-7mnkd\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.214623 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v6bx\" (UniqueName: \"kubernetes.io/projected/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-kube-api-access-9v6bx\") pod \"barbican-db-sync-7mnkd\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.214659 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-combined-ca-bundle\") pod \"neutron-db-sync-5gjhd\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.214694 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggrq5\" (UniqueName: \"kubernetes.io/projected/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-kube-api-access-ggrq5\") pod \"neutron-db-sync-5gjhd\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.214741 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-combined-ca-bundle\") pod \"barbican-db-sync-7mnkd\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.214769 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-config\") pod \"neutron-db-sync-5gjhd\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.227203 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nrl4d"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.234944 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-combined-ca-bundle\") pod \"neutron-db-sync-5gjhd\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.238082 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-config\") pod \"neutron-db-sync-5gjhd\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.266262 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvlfs\" (UniqueName: \"kubernetes.io/projected/df8301f3-a405-47fc-b1a8-475daf544079-kube-api-access-kvlfs\") pod \"cinder-db-sync-nmgsr\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.289921 4849 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ceilometer-0"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.292554 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-xqkc4"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.293877 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.294509 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.299836 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggrq5\" (UniqueName: \"kubernetes.io/projected/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-kube-api-access-ggrq5\") pod \"neutron-db-sync-5gjhd\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.300750 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.306299 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.306461 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.317265 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.318079 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-combined-ca-bundle\") pod \"barbican-db-sync-7mnkd\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.318198 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-db-sync-config-data\") pod \"barbican-db-sync-7mnkd\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.318238 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v6bx\" (UniqueName: \"kubernetes.io/projected/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-kube-api-access-9v6bx\") pod \"barbican-db-sync-7mnkd\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.348268 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-db-sync-config-data\") pod \"barbican-db-sync-7mnkd\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.352173 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-combined-ca-bundle\") pod \"barbican-db-sync-7mnkd\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 
11:46:31.365595 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.372013 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v6bx\" (UniqueName: \"kubernetes.io/projected/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-kube-api-access-9v6bx\") pod \"barbican-db-sync-7mnkd\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.373232 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-xqkc4"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.412966 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4wpm9"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.423491 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.435956 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.438209 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4wpm9"] Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.441212 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sstqq" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.441477 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444723 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-config\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444782 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444803 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-log-httpd\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444826 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c76k9\" (UniqueName: \"kubernetes.io/projected/dd84db06-a743-4726-bd6e-e694e3a17011-kube-api-access-c76k9\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444851 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hx2l\" (UniqueName: 
\"kubernetes.io/projected/75d0bfb8-146b-4d21-8e81-f2cef3d99489-kube-api-access-8hx2l\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444875 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-run-httpd\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444902 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-config-data\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444932 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444956 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.444992 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.445012 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-scripts\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.445036 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.453428 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.546895 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.546952 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-logs\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.546980 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-config\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547017 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547037 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-log-httpd\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547057 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmwq\" (UniqueName: \"kubernetes.io/projected/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-kube-api-access-zmmwq\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547074 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c76k9\" (UniqueName: \"kubernetes.io/projected/dd84db06-a743-4726-bd6e-e694e3a17011-kube-api-access-c76k9\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547096 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hx2l\" (UniqueName: \"kubernetes.io/projected/75d0bfb8-146b-4d21-8e81-f2cef3d99489-kube-api-access-8hx2l\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547119 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-run-httpd\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547137 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-scripts\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547171 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-config-data\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " 
pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547189 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-config-data\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547213 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547239 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547271 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-combined-ca-bundle\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547294 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547313 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-scripts\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.547941 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.549300 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-log-httpd\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.549904 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-run-httpd\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.550838 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.554774 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-config\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.558637 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-scripts\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.559505 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.559739 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-config-data\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.559930 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.571924 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c76k9\" (UniqueName: \"kubernetes.io/projected/dd84db06-a743-4726-bd6e-e694e3a17011-kube-api-access-c76k9\") pod \"dnsmasq-dns-7987f74bbc-xqkc4\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.572591 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.575154 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hx2l\" (UniqueName: \"kubernetes.io/projected/75d0bfb8-146b-4d21-8e81-f2cef3d99489-kube-api-access-8hx2l\") pod \"ceilometer-0\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") " pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.651507 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-logs\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.651614 4849 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zmmwq\" (UniqueName: \"kubernetes.io/projected/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-kube-api-access-zmmwq\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.651662 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-scripts\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.651719 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-config-data\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.651795 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-combined-ca-bundle\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.652044 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-logs\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.663161 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-scripts\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.664608 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.673589 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.675183 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmmwq\" (UniqueName: \"kubernetes.io/projected/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-kube-api-access-zmmwq\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.675912 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-combined-ca-bundle\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.676788 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-config-data\") pod \"placement-db-sync-4wpm9\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.793245 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4wpm9" Dec 09 11:46:31 crc kubenswrapper[4849]: I1209 11:46:31.843663 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mvpf9"] Dec 09 11:46:32 crc kubenswrapper[4849]: I1209 11:46:32.052976 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nrl4d"] Dec 09 11:46:32 crc kubenswrapper[4849]: I1209 11:46:32.153758 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nmgsr"] Dec 09 11:46:32 crc kubenswrapper[4849]: I1209 11:46:32.169375 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" event={"ID":"7ce82e74-f22d-4720-b298-95d0251583f6","Type":"ContainerStarted","Data":"5adee003bacce960adb59b030ed1738b8d4af69eaa57a6cfafda931ba90a869a"} Dec 09 11:46:32 crc kubenswrapper[4849]: I1209 11:46:32.171745 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvpf9" event={"ID":"e57021f3-19cd-4765-8f7b-a8cf451bbd70","Type":"ContainerStarted","Data":"49e0d3f7e0bb78f27ead8a24bebcb778066aa545fdd2108d6d37f9f7598056be"} Dec 09 11:46:32 crc kubenswrapper[4849]: I1209 11:46:32.340083 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7mnkd"] Dec 09 11:46:32 crc kubenswrapper[4849]: W1209 11:46:32.367124 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c9d5eb2_c2a5_4493_ab04_e8483f1efafe.slice/crio-f59fafd75b17c66ba2655ca91e86797102887117c1d3339f2b362d03fc9e3fbc WatchSource:0}: Error finding container f59fafd75b17c66ba2655ca91e86797102887117c1d3339f2b362d03fc9e3fbc: Status 404 returned error can't find the container with id f59fafd75b17c66ba2655ca91e86797102887117c1d3339f2b362d03fc9e3fbc Dec 09 11:46:32 crc kubenswrapper[4849]: I1209 11:46:32.377434 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5gjhd"] Dec 09 11:46:32 crc kubenswrapper[4849]: I1209 11:46:32.447400 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4wpm9"] Dec 09 11:46:32 crc kubenswrapper[4849]: I1209 11:46:32.733494 4849 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:46:32 crc kubenswrapper[4849]: I1209 11:46:32.785704 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-xqkc4"] Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.184452 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4wpm9" event={"ID":"c9d48847-f667-4f50-b9a1-d68bdf0a63a3","Type":"ContainerStarted","Data":"28470a168281102f91034cf950ba676e5e2f2ba2640fc44c56c79567d18d8013"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.187342 4849 generic.go:334] "Generic (PLEG): container finished" podID="dd84db06-a743-4726-bd6e-e694e3a17011" containerID="23d4539d903c668c14d8fccacc536191ef416392f7cfd579a2c82b9524117766" exitCode=0 Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.187400 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" event={"ID":"dd84db06-a743-4726-bd6e-e694e3a17011","Type":"ContainerDied","Data":"23d4539d903c668c14d8fccacc536191ef416392f7cfd579a2c82b9524117766"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.187434 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" event={"ID":"dd84db06-a743-4726-bd6e-e694e3a17011","Type":"ContainerStarted","Data":"4546f6c881a481584cec02ca37cf370ddc196566c0a44673fd1c2b9b61570f8c"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.189703 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7mnkd" event={"ID":"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944","Type":"ContainerStarted","Data":"c52a0d46293d807bb1d96d9c81eca575648bfa27a99cc0712e28a5067a468b5a"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.191591 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerStarted","Data":"2346cc5b0e7f52a8a445bfef6562b88ce3c34649f23e418ac9450a56fe0aa729"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.207568 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5gjhd" event={"ID":"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe","Type":"ContainerStarted","Data":"576d6927c42e97461923e686de8ef9568980b84c7935cb3adb7eb3ddfbe47f9a"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.207612 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5gjhd" event={"ID":"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe","Type":"ContainerStarted","Data":"f59fafd75b17c66ba2655ca91e86797102887117c1d3339f2b362d03fc9e3fbc"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.243015 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmgsr" event={"ID":"df8301f3-a405-47fc-b1a8-475daf544079","Type":"ContainerStarted","Data":"231b9d52b3f20de67a8e5ac232628753127960037b884d9aae8643abb9742c42"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.255768 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvpf9" event={"ID":"e57021f3-19cd-4765-8f7b-a8cf451bbd70","Type":"ContainerStarted","Data":"9a76b8ed763aa01ba5fbec4d9d92a6b47b5579237428e89fc6780928ccf4db97"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.270307 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5gjhd" podStartSLOduration=3.270236822 podStartE2EDuration="3.270236822s" podCreationTimestamp="2025-12-09 
11:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:33.23490759 +0000 UTC m=+1175.774791906" watchObservedRunningTime="2025-12-09 11:46:33.270236822 +0000 UTC m=+1175.810121148" Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.283826 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mvpf9" podStartSLOduration=3.283804611 podStartE2EDuration="3.283804611s" podCreationTimestamp="2025-12-09 11:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:33.281644856 +0000 UTC m=+1175.821529182" watchObservedRunningTime="2025-12-09 11:46:33.283804611 +0000 UTC m=+1175.823688927" Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.286444 4849 generic.go:334] "Generic (PLEG): container finished" podID="7ce82e74-f22d-4720-b298-95d0251583f6" containerID="5607bf28271c3cbd0ec228e9202c91239c39b2415d4b11ac6d3a7632008e565c" exitCode=0 Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.286500 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" event={"ID":"7ce82e74-f22d-4720-b298-95d0251583f6","Type":"ContainerDied","Data":"5607bf28271c3cbd0ec228e9202c91239c39b2415d4b11ac6d3a7632008e565c"} Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.895008 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.991627 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.994690 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-config\") pod \"7ce82e74-f22d-4720-b298-95d0251583f6\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.994757 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-nb\") pod \"7ce82e74-f22d-4720-b298-95d0251583f6\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.994789 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-dns-svc\") pod \"7ce82e74-f22d-4720-b298-95d0251583f6\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.994848 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kw9q\" (UniqueName: \"kubernetes.io/projected/7ce82e74-f22d-4720-b298-95d0251583f6-kube-api-access-5kw9q\") pod \"7ce82e74-f22d-4720-b298-95d0251583f6\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " Dec 09 11:46:33 crc kubenswrapper[4849]: I1209 11:46:33.994886 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-sb\") pod \"7ce82e74-f22d-4720-b298-95d0251583f6\" (UID: \"7ce82e74-f22d-4720-b298-95d0251583f6\") " Dec 09 11:46:34 crc kubenswrapper[4849]: 
I1209 11:46:34.023738 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce82e74-f22d-4720-b298-95d0251583f6-kube-api-access-5kw9q" (OuterVolumeSpecName: "kube-api-access-5kw9q") pod "7ce82e74-f22d-4720-b298-95d0251583f6" (UID: "7ce82e74-f22d-4720-b298-95d0251583f6"). InnerVolumeSpecName "kube-api-access-5kw9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.032556 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ce82e74-f22d-4720-b298-95d0251583f6" (UID: "7ce82e74-f22d-4720-b298-95d0251583f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.044851 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ce82e74-f22d-4720-b298-95d0251583f6" (UID: "7ce82e74-f22d-4720-b298-95d0251583f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.070117 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ce82e74-f22d-4720-b298-95d0251583f6" (UID: "7ce82e74-f22d-4720-b298-95d0251583f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.098432 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.098483 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.098501 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kw9q\" (UniqueName: \"kubernetes.io/projected/7ce82e74-f22d-4720-b298-95d0251583f6-kube-api-access-5kw9q\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.098511 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.104996 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-config" (OuterVolumeSpecName: "config") pod "7ce82e74-f22d-4720-b298-95d0251583f6" (UID: "7ce82e74-f22d-4720-b298-95d0251583f6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.200617 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce82e74-f22d-4720-b298-95d0251583f6-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.309722 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" event={"ID":"dd84db06-a743-4726-bd6e-e694e3a17011","Type":"ContainerStarted","Data":"0140b7b1561ffc9cf03bea1e9837ef67a5adf98839444d775f781ada1a51d672"} Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.310272 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.325614 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.325675 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-nrl4d" event={"ID":"7ce82e74-f22d-4720-b298-95d0251583f6","Type":"ContainerDied","Data":"5adee003bacce960adb59b030ed1738b8d4af69eaa57a6cfafda931ba90a869a"} Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.325714 4849 scope.go:117] "RemoveContainer" containerID="5607bf28271c3cbd0ec228e9202c91239c39b2415d4b11ac6d3a7632008e565c" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.380673 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" podStartSLOduration=3.380648753 podStartE2EDuration="3.380648753s" podCreationTimestamp="2025-12-09 11:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:34.341361983 +0000 UTC m=+1176.881246299" watchObservedRunningTime="2025-12-09 11:46:34.380648753 +0000 UTC m=+1176.920533079" Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.428491 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nrl4d"] Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.450189 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nrl4d"] Dec 09 11:46:34 crc kubenswrapper[4849]: I1209 11:46:34.552990 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce82e74-f22d-4720-b298-95d0251583f6" path="/var/lib/kubelet/pods/7ce82e74-f22d-4720-b298-95d0251583f6/volumes" Dec 09 11:46:38 crc kubenswrapper[4849]: I1209 11:46:38.404292 4849 generic.go:334] "Generic (PLEG): container finished" podID="e57021f3-19cd-4765-8f7b-a8cf451bbd70" containerID="9a76b8ed763aa01ba5fbec4d9d92a6b47b5579237428e89fc6780928ccf4db97" exitCode=0 Dec 09 11:46:38 crc kubenswrapper[4849]: I1209 11:46:38.404371 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvpf9" event={"ID":"e57021f3-19cd-4765-8f7b-a8cf451bbd70","Type":"ContainerDied","Data":"9a76b8ed763aa01ba5fbec4d9d92a6b47b5579237428e89fc6780928ccf4db97"} Dec 09 11:46:41 crc kubenswrapper[4849]: I1209 11:46:41.669009 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:46:41 crc kubenswrapper[4849]: I1209 11:46:41.746454 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4gbqd"] Dec 09 11:46:41 
crc kubenswrapper[4849]: I1209 11:46:41.746700 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerName="dnsmasq-dns" containerID="cri-o://6c982557461dadcf37aa08a92a036299bd84eaf2b86871158b690ef960e4af60" gracePeriod=10 Dec 09 11:46:41 crc kubenswrapper[4849]: I1209 11:46:41.966174 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.099982 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-config-data\") pod \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.100104 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpgnt\" (UniqueName: \"kubernetes.io/projected/e57021f3-19cd-4765-8f7b-a8cf451bbd70-kube-api-access-bpgnt\") pod \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.100187 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-fernet-keys\") pod \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.100227 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-combined-ca-bundle\") pod \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.100252 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-credential-keys\") pod \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.100347 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-scripts\") pod \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\" (UID: \"e57021f3-19cd-4765-8f7b-a8cf451bbd70\") " Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.106627 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e57021f3-19cd-4765-8f7b-a8cf451bbd70" (UID: "e57021f3-19cd-4765-8f7b-a8cf451bbd70"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.108180 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e57021f3-19cd-4765-8f7b-a8cf451bbd70" (UID: "e57021f3-19cd-4765-8f7b-a8cf451bbd70"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.124991 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-scripts" (OuterVolumeSpecName: "scripts") pod "e57021f3-19cd-4765-8f7b-a8cf451bbd70" (UID: "e57021f3-19cd-4765-8f7b-a8cf451bbd70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.129267 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-config-data" (OuterVolumeSpecName: "config-data") pod "e57021f3-19cd-4765-8f7b-a8cf451bbd70" (UID: "e57021f3-19cd-4765-8f7b-a8cf451bbd70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.135347 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57021f3-19cd-4765-8f7b-a8cf451bbd70-kube-api-access-bpgnt" (OuterVolumeSpecName: "kube-api-access-bpgnt") pod "e57021f3-19cd-4765-8f7b-a8cf451bbd70" (UID: "e57021f3-19cd-4765-8f7b-a8cf451bbd70"). InnerVolumeSpecName "kube-api-access-bpgnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.163383 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e57021f3-19cd-4765-8f7b-a8cf451bbd70" (UID: "e57021f3-19cd-4765-8f7b-a8cf451bbd70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.202747 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.202793 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.202807 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpgnt\" (UniqueName: \"kubernetes.io/projected/e57021f3-19cd-4765-8f7b-a8cf451bbd70-kube-api-access-bpgnt\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.202823 4849 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.202835 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.202843 4849 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e57021f3-19cd-4765-8f7b-a8cf451bbd70-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.447794 4849 generic.go:334] "Generic (PLEG): container finished" 
podID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerID="6c982557461dadcf37aa08a92a036299bd84eaf2b86871158b690ef960e4af60" exitCode=0 Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.447864 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" event={"ID":"be5d3d9b-b033-4b73-8044-1064dd5d4443","Type":"ContainerDied","Data":"6c982557461dadcf37aa08a92a036299bd84eaf2b86871158b690ef960e4af60"} Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.448992 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvpf9" event={"ID":"e57021f3-19cd-4765-8f7b-a8cf451bbd70","Type":"ContainerDied","Data":"49e0d3f7e0bb78f27ead8a24bebcb778066aa545fdd2108d6d37f9f7598056be"} Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.449021 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e0d3f7e0bb78f27ead8a24bebcb778066aa545fdd2108d6d37f9f7598056be" Dec 09 11:46:42 crc kubenswrapper[4849]: I1209 11:46:42.449079 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mvpf9" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.097180 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mvpf9"] Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.103877 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mvpf9"] Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.195075 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tbj8g"] Dec 09 11:46:43 crc kubenswrapper[4849]: E1209 11:46:43.195508 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57021f3-19cd-4765-8f7b-a8cf451bbd70" containerName="keystone-bootstrap" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.195528 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57021f3-19cd-4765-8f7b-a8cf451bbd70" containerName="keystone-bootstrap" Dec 09 11:46:43 crc kubenswrapper[4849]: E1209 11:46:43.195547 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce82e74-f22d-4720-b298-95d0251583f6" containerName="init" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.195554 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce82e74-f22d-4720-b298-95d0251583f6" containerName="init" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.195738 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57021f3-19cd-4765-8f7b-a8cf451bbd70" containerName="keystone-bootstrap" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.195779 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce82e74-f22d-4720-b298-95d0251583f6" containerName="init" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.196452 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.199116 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.199334 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.199974 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.200120 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rqcxg" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.200369 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.222371 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tbj8g"] Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.374989 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-config-data\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.375078 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-credential-keys\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.375327 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-fernet-keys\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.375432 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-combined-ca-bundle\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.375494 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgptn\" (UniqueName: \"kubernetes.io/projected/0bc5d74c-7648-4a3a-a858-dc699a6e0389-kube-api-access-sgptn\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.375587 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-scripts\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.476796 4849 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-config-data\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.476910 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-credential-keys\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.476970 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-fernet-keys\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.476997 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-combined-ca-bundle\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.477029 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgptn\" (UniqueName: \"kubernetes.io/projected/0bc5d74c-7648-4a3a-a858-dc699a6e0389-kube-api-access-sgptn\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.477082 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-scripts\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.483504 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-combined-ca-bundle\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.483701 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-fernet-keys\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.484023 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-config-data\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.485328 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-credential-keys\") pod \"keystone-bootstrap-tbj8g\" (UID: 
\"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.488634 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-scripts\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.495201 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgptn\" (UniqueName: \"kubernetes.io/projected/0bc5d74c-7648-4a3a-a858-dc699a6e0389-kube-api-access-sgptn\") pod \"keystone-bootstrap-tbj8g\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:43 crc kubenswrapper[4849]: I1209 11:46:43.518640 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:46:44 crc kubenswrapper[4849]: I1209 11:46:44.507950 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Dec 09 11:46:44 crc kubenswrapper[4849]: I1209 11:46:44.550257 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57021f3-19cd-4765-8f7b-a8cf451bbd70" path="/var/lib/kubelet/pods/e57021f3-19cd-4765-8f7b-a8cf451bbd70/volumes" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.507199 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: i/o timeout" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.566801 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.571858 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" event={"ID":"be5d3d9b-b033-4b73-8044-1064dd5d4443","Type":"ContainerDied","Data":"3b25dec948a988e2bf06b6e0a3a0ac1cca571dca129faf0e1e44f87c4d089b2e"} Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.571911 4849 scope.go:117] "RemoveContainer" containerID="6c982557461dadcf37aa08a92a036299bd84eaf2b86871158b690ef960e4af60" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.760109 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-sb\") pod \"be5d3d9b-b033-4b73-8044-1064dd5d4443\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.760167 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf5qp\" (UniqueName: \"kubernetes.io/projected/be5d3d9b-b033-4b73-8044-1064dd5d4443-kube-api-access-wf5qp\") pod \"be5d3d9b-b033-4b73-8044-1064dd5d4443\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.760207 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-nb\") pod \"be5d3d9b-b033-4b73-8044-1064dd5d4443\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.760262 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-config\") pod \"be5d3d9b-b033-4b73-8044-1064dd5d4443\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.760457 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-dns-svc\") pod \"be5d3d9b-b033-4b73-8044-1064dd5d4443\" (UID: \"be5d3d9b-b033-4b73-8044-1064dd5d4443\") " Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.769966 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be5d3d9b-b033-4b73-8044-1064dd5d4443-kube-api-access-wf5qp" (OuterVolumeSpecName: "kube-api-access-wf5qp") pod "be5d3d9b-b033-4b73-8044-1064dd5d4443" (UID: "be5d3d9b-b033-4b73-8044-1064dd5d4443"). InnerVolumeSpecName "kube-api-access-wf5qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.805686 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be5d3d9b-b033-4b73-8044-1064dd5d4443" (UID: "be5d3d9b-b033-4b73-8044-1064dd5d4443"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.842288 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be5d3d9b-b033-4b73-8044-1064dd5d4443" (UID: "be5d3d9b-b033-4b73-8044-1064dd5d4443"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.848050 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be5d3d9b-b033-4b73-8044-1064dd5d4443" (UID: "be5d3d9b-b033-4b73-8044-1064dd5d4443"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.860044 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-config" (OuterVolumeSpecName: "config") pod "be5d3d9b-b033-4b73-8044-1064dd5d4443" (UID: "be5d3d9b-b033-4b73-8044-1064dd5d4443"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.866644 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.866674 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf5qp\" (UniqueName: \"kubernetes.io/projected/be5d3d9b-b033-4b73-8044-1064dd5d4443-kube-api-access-wf5qp\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.866686 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.866696 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:54 crc kubenswrapper[4849]: I1209 11:46:54.866705 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5d3d9b-b033-4b73-8044-1064dd5d4443-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:55 crc kubenswrapper[4849]: I1209 11:46:55.582360 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" Dec 09 11:46:55 crc kubenswrapper[4849]: I1209 11:46:55.620962 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4gbqd"] Dec 09 11:46:55 crc kubenswrapper[4849]: I1209 11:46:55.629810 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4gbqd"] Dec 09 11:46:55 crc kubenswrapper[4849]: I1209 11:46:55.769529 4849 scope.go:117] "RemoveContainer" containerID="e0679b16f9682dfd91b75d712115ab7b9e1f0d12b19b4459c7175b9df06420b9" Dec 09 11:46:55 crc kubenswrapper[4849]: E1209 11:46:55.813321 4849 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 09 11:46:55 crc kubenswrapper[4849]: E1209 11:46:55.813539 4849 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvlfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nmgsr_openstack(df8301f3-a405-47fc-b1a8-475daf544079): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 
11:46:55 crc kubenswrapper[4849]: E1209 11:46:55.816491 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nmgsr" podUID="df8301f3-a405-47fc-b1a8-475daf544079" Dec 09 11:46:56 crc kubenswrapper[4849]: W1209 11:46:56.263816 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bc5d74c_7648_4a3a_a858_dc699a6e0389.slice/crio-813eea1f76da6971c766a8a5f2d42d6d55cc37e0245ce82d08ea332d9588ea32 WatchSource:0}: Error finding container 813eea1f76da6971c766a8a5f2d42d6d55cc37e0245ce82d08ea332d9588ea32: Status 404 returned error can't find the container with id 813eea1f76da6971c766a8a5f2d42d6d55cc37e0245ce82d08ea332d9588ea32 Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.266339 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tbj8g"] Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.547719 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" path="/var/lib/kubelet/pods/be5d3d9b-b033-4b73-8044-1064dd5d4443/volumes" Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.594084 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tbj8g" event={"ID":"0bc5d74c-7648-4a3a-a858-dc699a6e0389","Type":"ContainerStarted","Data":"2fd9e777f2d8eed9b557fe05e681f4721f7305d52c57341be99a5c250054d1fa"} Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.594476 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tbj8g" event={"ID":"0bc5d74c-7648-4a3a-a858-dc699a6e0389","Type":"ContainerStarted","Data":"813eea1f76da6971c766a8a5f2d42d6d55cc37e0245ce82d08ea332d9588ea32"} Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.599403 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4wpm9" event={"ID":"c9d48847-f667-4f50-b9a1-d68bdf0a63a3","Type":"ContainerStarted","Data":"b528dc5bbc354088f29d6e946d04d92fedb563de87f39d9f2760c6a71675caa5"} Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.602369 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7mnkd" event={"ID":"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944","Type":"ContainerStarted","Data":"95f6d5d6ae0acce5c0a9e51b6358b2217f72e71e3a83695f9b93e3a6826bfcb3"} Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.604514 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerStarted","Data":"d737fa8f20ae3532085d512f58419e3598cd420f88a3ababdce2de078f5c00fa"} Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.605808 4849 generic.go:334] "Generic (PLEG): container finished" podID="2c9d5eb2-c2a5-4493-ab04-e8483f1efafe" containerID="576d6927c42e97461923e686de8ef9568980b84c7935cb3adb7eb3ddfbe47f9a" exitCode=0 Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.605892 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5gjhd" event={"ID":"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe","Type":"ContainerDied","Data":"576d6927c42e97461923e686de8ef9568980b84c7935cb3adb7eb3ddfbe47f9a"} Dec 09 11:46:56 crc kubenswrapper[4849]: E1209 11:46:56.610934 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-nmgsr" podUID="df8301f3-a405-47fc-b1a8-475daf544079" Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.622367 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tbj8g" podStartSLOduration=13.622345918 podStartE2EDuration="13.622345918s" podCreationTimestamp="2025-12-09 11:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:46:56.616881112 +0000 UTC m=+1199.156765438" watchObservedRunningTime="2025-12-09 11:46:56.622345918 +0000 UTC m=+1199.162230234" Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.641657 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7mnkd" podStartSLOduration=2.210346627 podStartE2EDuration="25.641633949s" podCreationTimestamp="2025-12-09 11:46:31 +0000 UTC" firstStartedPulling="2025-12-09 11:46:32.348323505 +0000 UTC m=+1174.888207821" lastFinishedPulling="2025-12-09 11:46:55.779610827 +0000 UTC m=+1198.319495143" observedRunningTime="2025-12-09 11:46:56.632246475 +0000 UTC m=+1199.172130791" watchObservedRunningTime="2025-12-09 11:46:56.641633949 +0000 UTC m=+1199.181518265" Dec 09 11:46:56 crc kubenswrapper[4849]: I1209 11:46:56.659279 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4wpm9" podStartSLOduration=2.368235687 podStartE2EDuration="25.659256719s" podCreationTimestamp="2025-12-09 11:46:31 +0000 UTC" firstStartedPulling="2025-12-09 11:46:32.487439026 +0000 UTC m=+1175.027323342" lastFinishedPulling="2025-12-09 11:46:55.778460058 +0000 UTC m=+1198.318344374" observedRunningTime="2025-12-09 11:46:56.657346321 +0000 UTC m=+1199.197230637" watchObservedRunningTime="2025-12-09 11:46:56.659256719 +0000 UTC m=+1199.199141045" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.098766 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.232300 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggrq5\" (UniqueName: \"kubernetes.io/projected/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-kube-api-access-ggrq5\") pod \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.232675 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-config\") pod \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.232825 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-combined-ca-bundle\") pod \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\" (UID: \"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe\") " Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.253821 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-kube-api-access-ggrq5" (OuterVolumeSpecName: "kube-api-access-ggrq5") pod "2c9d5eb2-c2a5-4493-ab04-e8483f1efafe" (UID: "2c9d5eb2-c2a5-4493-ab04-e8483f1efafe"). InnerVolumeSpecName "kube-api-access-ggrq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.262071 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c9d5eb2-c2a5-4493-ab04-e8483f1efafe" (UID: "2c9d5eb2-c2a5-4493-ab04-e8483f1efafe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.282000 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-config" (OuterVolumeSpecName: "config") pod "2c9d5eb2-c2a5-4493-ab04-e8483f1efafe" (UID: "2c9d5eb2-c2a5-4493-ab04-e8483f1efafe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.335052 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.335103 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggrq5\" (UniqueName: \"kubernetes.io/projected/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-kube-api-access-ggrq5\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.335119 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.640310 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5gjhd" event={"ID":"2c9d5eb2-c2a5-4493-ab04-e8483f1efafe","Type":"ContainerDied","Data":"f59fafd75b17c66ba2655ca91e86797102887117c1d3339f2b362d03fc9e3fbc"} Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.640360 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5gjhd" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.640368 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f59fafd75b17c66ba2655ca91e86797102887117c1d3339f2b362d03fc9e3fbc" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.647242 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerStarted","Data":"fe801c71078b298ecdbcd0b422e346c62d4a765cf23f377f60d3dfccbfa16066"} Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.963370 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-2lckk"] Dec 09 11:46:58 crc kubenswrapper[4849]: E1209 11:46:58.964322 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9d5eb2-c2a5-4493-ab04-e8483f1efafe" containerName="neutron-db-sync" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.964338 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9d5eb2-c2a5-4493-ab04-e8483f1efafe" containerName="neutron-db-sync" Dec 09 11:46:58 crc kubenswrapper[4849]: E1209 11:46:58.964368 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerName="init" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.964376 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerName="init" Dec 09 11:46:58 crc kubenswrapper[4849]: E1209 11:46:58.964426 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerName="dnsmasq-dns" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.964435 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerName="dnsmasq-dns" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.964631 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerName="dnsmasq-dns" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.964656 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9d5eb2-c2a5-4493-ab04-e8483f1efafe" 
containerName="neutron-db-sync" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.965563 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:58 crc kubenswrapper[4849]: I1209 11:46:58.986910 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-2lckk"] Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.040480 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dbfd748b8-p8g49"] Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.042249 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.050816 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.051161 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.051381 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.069447 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-x68vd" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.085100 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbfd748b8-p8g49"] Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.152788 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-ovndb-tls-certs\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.152933 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p8kq\" (UniqueName: \"kubernetes.io/projected/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-kube-api-access-8p8kq\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.152976 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-dns-svc\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.153034 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.153050 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-httpd-config\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 
09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.153497 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-config\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.153558 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnq4j\" (UniqueName: \"kubernetes.io/projected/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-kube-api-access-qnq4j\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.153584 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.153604 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-combined-ca-bundle\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.155123 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-config\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.260434 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-dns-svc\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.260538 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.261668 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-dns-svc\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.262864 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-httpd-config\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 
11:46:59.262892 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-config\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.263311 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnq4j\" (UniqueName: \"kubernetes.io/projected/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-kube-api-access-qnq4j\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.263342 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.263366 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-combined-ca-bundle\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.263452 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-config\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.263960 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-ovndb-tls-certs\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.264126 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p8kq\" (UniqueName: \"kubernetes.io/projected/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-kube-api-access-8p8kq\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.264291 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-config\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.264536 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.264917 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.270972 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-config\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.280378 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnq4j\" (UniqueName: \"kubernetes.io/projected/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-kube-api-access-qnq4j\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.285807 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p8kq\" (UniqueName: \"kubernetes.io/projected/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-kube-api-access-8p8kq\") pod \"dnsmasq-dns-7b946d459c-2lckk\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.295355 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-httpd-config\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.297341 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-ovndb-tls-certs\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.298347 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-combined-ca-bundle\") pod \"neutron-dbfd748b8-p8g49\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.311792 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.394374 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.508549 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4gbqd" podUID="be5d3d9b-b033-4b73-8044-1064dd5d4443" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: i/o timeout" Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.679069 4849 generic.go:334] "Generic (PLEG): container finished" podID="c9d48847-f667-4f50-b9a1-d68bdf0a63a3" containerID="b528dc5bbc354088f29d6e946d04d92fedb563de87f39d9f2760c6a71675caa5" exitCode=0 Dec 09 11:46:59 crc kubenswrapper[4849]: I1209 11:46:59.679132 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4wpm9" event={"ID":"c9d48847-f667-4f50-b9a1-d68bdf0a63a3","Type":"ContainerDied","Data":"b528dc5bbc354088f29d6e946d04d92fedb563de87f39d9f2760c6a71675caa5"} Dec 09 11:47:00 crc kubenswrapper[4849]: I1209 11:47:00.131434 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbfd748b8-p8g49"] Dec 09 11:47:00 crc kubenswrapper[4849]: I1209 11:47:00.145043 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-2lckk"] Dec 09 11:47:00 crc kubenswrapper[4849]: I1209 11:47:00.714477 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbfd748b8-p8g49" event={"ID":"bfa8b5b6-c9f2-40c6-8e55-b465168d380a","Type":"ContainerStarted","Data":"7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2"} Dec 09 11:47:00 crc kubenswrapper[4849]: I1209 11:47:00.714828 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbfd748b8-p8g49" event={"ID":"bfa8b5b6-c9f2-40c6-8e55-b465168d380a","Type":"ContainerStarted","Data":"d07cbfd9e858b1a55bd73cc285c84bbf4256f7d40706d975ac544c5f8736b69a"} Dec 09 11:47:00 crc kubenswrapper[4849]: I1209 11:47:00.733496 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" event={"ID":"f51e65aa-7014-461e-8dce-8fb7aa29d8b7","Type":"ContainerStarted","Data":"60b0e097f18f8a35947a16b336b5739988243bd9f6eda96a6b58d552e5ddce32"} Dec 09 11:47:00 crc kubenswrapper[4849]: I1209 11:47:00.733695 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" event={"ID":"f51e65aa-7014-461e-8dce-8fb7aa29d8b7","Type":"ContainerStarted","Data":"1869b32f2738fbb494ec7ff126df1295696631ab53c5d52af03ade9aab5ac035"} Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.284713 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4wpm9" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.379283 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-scripts\") pod \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.379342 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-config-data\") pod \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.379359 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-logs\") pod \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.379431 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmmwq\" (UniqueName: \"kubernetes.io/projected/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-kube-api-access-zmmwq\") pod \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.379566 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-combined-ca-bundle\") pod \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\" (UID: \"c9d48847-f667-4f50-b9a1-d68bdf0a63a3\") " Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.381217 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-logs" (OuterVolumeSpecName: "logs") pod "c9d48847-f667-4f50-b9a1-d68bdf0a63a3" (UID: "c9d48847-f667-4f50-b9a1-d68bdf0a63a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.386026 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-kube-api-access-zmmwq" (OuterVolumeSpecName: "kube-api-access-zmmwq") pod "c9d48847-f667-4f50-b9a1-d68bdf0a63a3" (UID: "c9d48847-f667-4f50-b9a1-d68bdf0a63a3"). InnerVolumeSpecName "kube-api-access-zmmwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.394695 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-scripts" (OuterVolumeSpecName: "scripts") pod "c9d48847-f667-4f50-b9a1-d68bdf0a63a3" (UID: "c9d48847-f667-4f50-b9a1-d68bdf0a63a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.415190 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-config-data" (OuterVolumeSpecName: "config-data") pod "c9d48847-f667-4f50-b9a1-d68bdf0a63a3" (UID: "c9d48847-f667-4f50-b9a1-d68bdf0a63a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.418852 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9d48847-f667-4f50-b9a1-d68bdf0a63a3" (UID: "c9d48847-f667-4f50-b9a1-d68bdf0a63a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.481397 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.481457 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.481470 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.481480 4849 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-logs\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.481492 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmmwq\" (UniqueName: \"kubernetes.io/projected/c9d48847-f667-4f50-b9a1-d68bdf0a63a3-kube-api-access-zmmwq\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.776878 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4wpm9" event={"ID":"c9d48847-f667-4f50-b9a1-d68bdf0a63a3","Type":"ContainerDied","Data":"28470a168281102f91034cf950ba676e5e2f2ba2640fc44c56c79567d18d8013"} Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.777204 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28470a168281102f91034cf950ba676e5e2f2ba2640fc44c56c79567d18d8013" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.777259 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4wpm9" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.803781 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbfd748b8-p8g49" event={"ID":"bfa8b5b6-c9f2-40c6-8e55-b465168d380a","Type":"ContainerStarted","Data":"5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31"} Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.803846 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.822228 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c689fb97-j4mnm"] Dec 09 11:47:01 crc kubenswrapper[4849]: E1209 11:47:01.822747 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d48847-f667-4f50-b9a1-d68bdf0a63a3" containerName="placement-db-sync" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.822764 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d48847-f667-4f50-b9a1-d68bdf0a63a3" containerName="placement-db-sync" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.822917 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d48847-f667-4f50-b9a1-d68bdf0a63a3" containerName="placement-db-sync" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.825233 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.825454 4849 generic.go:334] "Generic (PLEG): container finished" podID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" containerID="60b0e097f18f8a35947a16b336b5739988243bd9f6eda96a6b58d552e5ddce32" exitCode=0 Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.825511 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" event={"ID":"f51e65aa-7014-461e-8dce-8fb7aa29d8b7","Type":"ContainerDied","Data":"60b0e097f18f8a35947a16b336b5739988243bd9f6eda96a6b58d552e5ddce32"} Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.829758 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.830797 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.846519 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c689fb97-j4mnm"] Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.877124 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dbfd748b8-p8g49" podStartSLOduration=3.8771033360000002 podStartE2EDuration="3.877103336s" podCreationTimestamp="2025-12-09 11:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:01.866787728 +0000 UTC m=+1204.406672054" watchObservedRunningTime="2025-12-09 11:47:01.877103336 +0000 UTC m=+1204.416987652" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.947483 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-77d49689b4-4nl2p"] Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.949209 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.960583 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.960940 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.961066 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sstqq" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.961158 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.961183 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.991365 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-ovndb-tls-certs\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.991420 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-config\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.991445 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-httpd-config\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.991514 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-internal-tls-certs\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.991573 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgdfh\" (UniqueName: \"kubernetes.io/projected/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-kube-api-access-mgdfh\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.991622 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-public-tls-certs\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:01 crc kubenswrapper[4849]: I1209 11:47:01.991643 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-combined-ca-bundle\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.055549 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d49689b4-4nl2p"] Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.107369 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-config-data\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.107693 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-internal-tls-certs\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.107774 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-public-tls-certs\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.107849 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddd6h\" (UniqueName: \"kubernetes.io/projected/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-kube-api-access-ddd6h\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.107934 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-logs\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.108028 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgdfh\" (UniqueName: \"kubernetes.io/projected/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-kube-api-access-mgdfh\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.108125 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-scripts\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.108215 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-public-tls-certs\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 
11:47:02.108289 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-combined-ca-bundle\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.108362 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-internal-tls-certs\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.108778 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-ovndb-tls-certs\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.108861 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-config\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.108903 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-httpd-config\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.108941 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-combined-ca-bundle\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.115908 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-combined-ca-bundle\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.116976 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-internal-tls-certs\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.118355 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-config\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.121049 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-ovndb-tls-certs\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.126845 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-httpd-config\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.127760 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-public-tls-certs\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.137386 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgdfh\" (UniqueName: \"kubernetes.io/projected/8ae2f3e7-3db7-4477-8c03-8c8817fe17d3-kube-api-access-mgdfh\") pod \"neutron-c689fb97-j4mnm\" (UID: \"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3\") " pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.210207 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-scripts\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.210817 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-internal-tls-certs\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.210892 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-combined-ca-bundle\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.210939 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-config-data\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.210974 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-public-tls-certs\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.210995 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddd6h\" (UniqueName: \"kubernetes.io/projected/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-kube-api-access-ddd6h\") pod \"placement-77d49689b4-4nl2p\" (UID: 
\"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.211023 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-logs\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.211371 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-logs\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.218096 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-scripts\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.218274 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-public-tls-certs\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.218390 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-config-data\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.218609 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-internal-tls-certs\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.220948 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-combined-ca-bundle\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.237322 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddd6h\" (UniqueName: \"kubernetes.io/projected/d11a8e2b-c868-44e0-a5a5-56267d22e4b4-kube-api-access-ddd6h\") pod \"placement-77d49689b4-4nl2p\" (UID: \"d11a8e2b-c868-44e0-a5a5-56267d22e4b4\") " pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.300622 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.375124 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.844026 4849 generic.go:334] "Generic (PLEG): container finished" podID="4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944" containerID="95f6d5d6ae0acce5c0a9e51b6358b2217f72e71e3a83695f9b93e3a6826bfcb3" exitCode=0 Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.844100 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7mnkd" event={"ID":"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944","Type":"ContainerDied","Data":"95f6d5d6ae0acce5c0a9e51b6358b2217f72e71e3a83695f9b93e3a6826bfcb3"} Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.853563 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" event={"ID":"f51e65aa-7014-461e-8dce-8fb7aa29d8b7","Type":"ContainerStarted","Data":"328131a36a5f6bdff961a66e201b46d4aa296375a912934ff1b0ee3fc02db67e"} Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.853704 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.891708 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" podStartSLOduration=4.8916870249999995 podStartE2EDuration="4.891687025s" podCreationTimestamp="2025-12-09 11:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:02.88424634 +0000 UTC m=+1205.424130656" watchObservedRunningTime="2025-12-09 11:47:02.891687025 +0000 UTC m=+1205.431571351" Dec 09 11:47:02 crc kubenswrapper[4849]: I1209 11:47:02.963187 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c689fb97-j4mnm"] Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.014724 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d49689b4-4nl2p"] Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.865726 4849 generic.go:334] "Generic (PLEG): container finished" podID="0bc5d74c-7648-4a3a-a858-dc699a6e0389" containerID="2fd9e777f2d8eed9b557fe05e681f4721f7305d52c57341be99a5c250054d1fa" exitCode=0 Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.865796 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tbj8g" event={"ID":"0bc5d74c-7648-4a3a-a858-dc699a6e0389","Type":"ContainerDied","Data":"2fd9e777f2d8eed9b557fe05e681f4721f7305d52c57341be99a5c250054d1fa"} Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.871137 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d49689b4-4nl2p" event={"ID":"d11a8e2b-c868-44e0-a5a5-56267d22e4b4","Type":"ContainerStarted","Data":"91b29e3abe93d695cac4c2f010bbaf00199028ac4b9db89cc919944b25b15095"} Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.871179 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d49689b4-4nl2p" event={"ID":"d11a8e2b-c868-44e0-a5a5-56267d22e4b4","Type":"ContainerStarted","Data":"a219be2a367e90e88ccebfc14ce1ace158bde4f57f25b54d2885c583efc008f2"} Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.871189 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d49689b4-4nl2p" event={"ID":"d11a8e2b-c868-44e0-a5a5-56267d22e4b4","Type":"ContainerStarted","Data":"b19454e85dd7701c38b1ac090755edd11b10ab9ca1932d9307a1820af774dd53"} Dec 09 11:47:03 crc 
kubenswrapper[4849]: I1209 11:47:03.872038 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.872065 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.878222 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c689fb97-j4mnm" event={"ID":"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3","Type":"ContainerStarted","Data":"276494da88c11a6dcfddbcc3cbb5d0a7a40099d3c75d8e14169dd268d2c15b60"} Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.878261 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c689fb97-j4mnm" event={"ID":"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3","Type":"ContainerStarted","Data":"575df6bb1c92bd70f92ae8f830aff6553c51d3f79982836d542608ef652dac50"} Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.878271 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c689fb97-j4mnm" event={"ID":"8ae2f3e7-3db7-4477-8c03-8c8817fe17d3","Type":"ContainerStarted","Data":"34a9e0c808cf3c4f754c280dc549b914a7f4efe2a9fae852f6b550d84374c02d"} Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.878444 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.923651 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-77d49689b4-4nl2p" podStartSLOduration=2.923627828 podStartE2EDuration="2.923627828s" podCreationTimestamp="2025-12-09 11:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:03.923138807 +0000 UTC m=+1206.463023123" watchObservedRunningTime="2025-12-09 11:47:03.923627828 +0000 UTC m=+1206.463512164" Dec 09 11:47:03 crc kubenswrapper[4849]: I1209 11:47:03.960359 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c689fb97-j4mnm" podStartSLOduration=2.960334434 podStartE2EDuration="2.960334434s" podCreationTimestamp="2025-12-09 11:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:03.956794226 +0000 UTC m=+1206.496678562" watchObservedRunningTime="2025-12-09 11:47:03.960334434 +0000 UTC m=+1206.500218750" Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.047893 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.215647 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-db-sync-config-data\") pod \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.215733 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-combined-ca-bundle\") pod \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.215876 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v6bx\" (UniqueName: \"kubernetes.io/projected/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-kube-api-access-9v6bx\") pod \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\" (UID: \"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944\") " Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.225724 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944" (UID: "4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.248427 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-kube-api-access-9v6bx" (OuterVolumeSpecName: "kube-api-access-9v6bx") pod "4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944" (UID: "4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944"). InnerVolumeSpecName "kube-api-access-9v6bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.289354 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944" (UID: "4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.318205 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v6bx\" (UniqueName: \"kubernetes.io/projected/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-kube-api-access-9v6bx\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.318247 4849 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.318255 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.905636 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7mnkd" event={"ID":"4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944","Type":"ContainerDied","Data":"c52a0d46293d807bb1d96d9c81eca575648bfa27a99cc0712e28a5067a468b5a"} Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.905708 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c52a0d46293d807bb1d96d9c81eca575648bfa27a99cc0712e28a5067a468b5a" Dec 09 11:47:06 crc kubenswrapper[4849]: I1209 11:47:06.905713 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7mnkd" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.430682 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-69b864d45f-rvq8j"] Dec 09 11:47:07 crc kubenswrapper[4849]: E1209 11:47:07.431188 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944" containerName="barbican-db-sync" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.431205 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944" containerName="barbican-db-sync" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.431469 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944" containerName="barbican-db-sync" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.432689 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.452110 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2d75s" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.455989 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-69b864d45f-rvq8j"] Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.466386 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.466906 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.545718 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-846cbdf45b-jlcvr"] Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.547193 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f7b258-dafd-4c17-85d7-457129b212de-combined-ca-bundle\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.547256 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54f7b258-dafd-4c17-85d7-457129b212de-logs\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.547277 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f7b258-dafd-4c17-85d7-457129b212de-config-data\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.547305 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn9rk\" (UniqueName: \"kubernetes.io/projected/54f7b258-dafd-4c17-85d7-457129b212de-kube-api-access-dn9rk\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.547341 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54f7b258-dafd-4c17-85d7-457129b212de-config-data-custom\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.547704 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.555770 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.580476 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-846cbdf45b-jlcvr"] Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.617518 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-2lckk"] Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.617873 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" podUID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" containerName="dnsmasq-dns" containerID="cri-o://328131a36a5f6bdff961a66e201b46d4aa296375a912934ff1b0ee3fc02db67e" gracePeriod=10 Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.619594 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653007 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54f7b258-dafd-4c17-85d7-457129b212de-config-data-custom\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653158 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b8073bc-e628-45c2-8d54-a455f73261af-config-data-custom\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653302 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8073bc-e628-45c2-8d54-a455f73261af-config-data\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653339 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8073bc-e628-45c2-8d54-a455f73261af-combined-ca-bundle\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653388 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m54mp\" (UniqueName: \"kubernetes.io/projected/1b8073bc-e628-45c2-8d54-a455f73261af-kube-api-access-m54mp\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653468 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/54f7b258-dafd-4c17-85d7-457129b212de-combined-ca-bundle\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653499 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54f7b258-dafd-4c17-85d7-457129b212de-logs\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653520 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f7b258-dafd-4c17-85d7-457129b212de-config-data\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653564 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn9rk\" (UniqueName: \"kubernetes.io/projected/54f7b258-dafd-4c17-85d7-457129b212de-kube-api-access-dn9rk\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.653587 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8073bc-e628-45c2-8d54-a455f73261af-logs\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.664131 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54f7b258-dafd-4c17-85d7-457129b212de-logs\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.675678 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wjf4f"] Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.677552 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.680750 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f7b258-dafd-4c17-85d7-457129b212de-config-data\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.692987 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54f7b258-dafd-4c17-85d7-457129b212de-config-data-custom\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.698784 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f7b258-dafd-4c17-85d7-457129b212de-combined-ca-bundle\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.700134 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn9rk\" (UniqueName: \"kubernetes.io/projected/54f7b258-dafd-4c17-85d7-457129b212de-kube-api-access-dn9rk\") pod \"barbican-worker-69b864d45f-rvq8j\" (UID: \"54f7b258-dafd-4c17-85d7-457129b212de\") " pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.715799 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wjf4f"] Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.762316 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m54mp\" (UniqueName: \"kubernetes.io/projected/1b8073bc-e628-45c2-8d54-a455f73261af-kube-api-access-m54mp\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.762422 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8073bc-e628-45c2-8d54-a455f73261af-logs\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.762477 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b8073bc-e628-45c2-8d54-a455f73261af-config-data-custom\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.762542 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8073bc-e628-45c2-8d54-a455f73261af-config-data\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.762567 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8073bc-e628-45c2-8d54-a455f73261af-combined-ca-bundle\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.763278 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8073bc-e628-45c2-8d54-a455f73261af-logs\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.767090 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b8073bc-e628-45c2-8d54-a455f73261af-config-data-custom\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.770681 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8073bc-e628-45c2-8d54-a455f73261af-combined-ca-bundle\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.793492 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8073bc-e628-45c2-8d54-a455f73261af-config-data\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.807892 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-69b864d45f-rvq8j" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.865832 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-dns-svc\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.865946 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-config\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.866012 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.866040 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2pjt\" (UniqueName: \"kubernetes.io/projected/e42c2434-f83d-4531-9f19-073674ff63dd-kube-api-access-w2pjt\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.866080 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.875219 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m54mp\" (UniqueName: \"kubernetes.io/projected/1b8073bc-e628-45c2-8d54-a455f73261af-kube-api-access-m54mp\") pod \"barbican-keystone-listener-846cbdf45b-jlcvr\" (UID: \"1b8073bc-e628-45c2-8d54-a455f73261af\") " pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.892880 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.968104 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-config\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.968618 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.968754 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2pjt\" (UniqueName: \"kubernetes.io/projected/e42c2434-f83d-4531-9f19-073674ff63dd-kube-api-access-w2pjt\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.968886 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.969126 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-dns-svc\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.970342 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-dns-svc\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.971268 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.971772 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:07 crc kubenswrapper[4849]: I1209 11:47:07.971827 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-config\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 
11:47:08.063554 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2pjt\" (UniqueName: \"kubernetes.io/projected/e42c2434-f83d-4531-9f19-073674ff63dd-kube-api-access-w2pjt\") pod \"dnsmasq-dns-6bb684768f-wjf4f\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.236225 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c58b6b59d-hkfg5"] Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.237754 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.240790 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.259720 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.304003 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c58b6b59d-hkfg5"] Dec 09 11:47:08 crc kubenswrapper[4849]: E1209 11:47:08.333444 4849 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf51e65aa_7014_461e_8dce_8fb7aa29d8b7.slice/crio-328131a36a5f6bdff961a66e201b46d4aa296375a912934ff1b0ee3fc02db67e.scope\": RecentStats: unable to find data in memory cache]" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.377952 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2e91051-7b71-474b-a57d-c482d68f96b5-logs\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.378237 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data-custom\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.378349 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.378374 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-combined-ca-bundle\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.378479 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg9pc\" (UniqueName: \"kubernetes.io/projected/c2e91051-7b71-474b-a57d-c482d68f96b5-kube-api-access-wg9pc\") pod 
\"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.480107 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-combined-ca-bundle\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.480162 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.481101 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg9pc\" (UniqueName: \"kubernetes.io/projected/c2e91051-7b71-474b-a57d-c482d68f96b5-kube-api-access-wg9pc\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.481215 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2e91051-7b71-474b-a57d-c482d68f96b5-logs\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.481270 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data-custom\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.482052 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2e91051-7b71-474b-a57d-c482d68f96b5-logs\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.486689 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.486902 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-combined-ca-bundle\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.487184 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data-custom\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " 
pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.504919 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg9pc\" (UniqueName: \"kubernetes.io/projected/c2e91051-7b71-474b-a57d-c482d68f96b5-kube-api-access-wg9pc\") pod \"barbican-api-c58b6b59d-hkfg5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:08 crc kubenswrapper[4849]: I1209 11:47:08.560987 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:09 crc kubenswrapper[4849]: I1209 11:47:09.313364 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" podUID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Dec 09 11:47:09 crc kubenswrapper[4849]: I1209 11:47:09.976139 4849 generic.go:334] "Generic (PLEG): container finished" podID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" containerID="328131a36a5f6bdff961a66e201b46d4aa296375a912934ff1b0ee3fc02db67e" exitCode=0 Dec 09 11:47:09 crc kubenswrapper[4849]: I1209 11:47:09.976180 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" event={"ID":"f51e65aa-7014-461e-8dce-8fb7aa29d8b7","Type":"ContainerDied","Data":"328131a36a5f6bdff961a66e201b46d4aa296375a912934ff1b0ee3fc02db67e"} Dec 09 11:47:10 crc kubenswrapper[4849]: I1209 11:47:10.886644 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b9f4d9d4-l4khd"] Dec 09 11:47:10 crc kubenswrapper[4849]: I1209 11:47:10.888378 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:10 crc kubenswrapper[4849]: I1209 11:47:10.915300 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 09 11:47:10 crc kubenswrapper[4849]: I1209 11:47:10.917250 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 09 11:47:10 crc kubenswrapper[4849]: I1209 11:47:10.917521 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b9f4d9d4-l4khd"] Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.031392 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2v9h\" (UniqueName: \"kubernetes.io/projected/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-kube-api-access-r2v9h\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.031759 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-combined-ca-bundle\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.031791 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-logs\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.031826 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-config-data-custom\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.031845 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-public-tls-certs\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.031925 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-config-data\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.031952 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-internal-tls-certs\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.133082 4849 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-config-data\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.133151 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-internal-tls-certs\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.133205 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2v9h\" (UniqueName: \"kubernetes.io/projected/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-kube-api-access-r2v9h\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.133232 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-combined-ca-bundle\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.133263 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-logs\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.133297 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-config-data-custom\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.133322 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-public-tls-certs\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.134223 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-logs\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.165267 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-combined-ca-bundle\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.166238 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-config-data\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.173042 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2v9h\" (UniqueName: \"kubernetes.io/projected/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-kube-api-access-r2v9h\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.184435 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-config-data-custom\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.201036 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-internal-tls-certs\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.204558 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d4aeb9-b28b-4315-9bbf-aab0e5247d9a-public-tls-certs\") pod \"barbican-api-7b9f4d9d4-l4khd\" (UID: \"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a\") " pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.218626 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.755359 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.845115 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-fernet-keys\") pod \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.845430 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgptn\" (UniqueName: \"kubernetes.io/projected/0bc5d74c-7648-4a3a-a858-dc699a6e0389-kube-api-access-sgptn\") pod \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.845546 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-credential-keys\") pod \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.845563 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-combined-ca-bundle\") pod \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.845605 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-scripts\") pod \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.845702 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-config-data\") pod \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\" (UID: \"0bc5d74c-7648-4a3a-a858-dc699a6e0389\") " Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.860805 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0bc5d74c-7648-4a3a-a858-dc699a6e0389" (UID: "0bc5d74c-7648-4a3a-a858-dc699a6e0389"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.864212 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-scripts" (OuterVolumeSpecName: "scripts") pod "0bc5d74c-7648-4a3a-a858-dc699a6e0389" (UID: "0bc5d74c-7648-4a3a-a858-dc699a6e0389"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.874466 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0bc5d74c-7648-4a3a-a858-dc699a6e0389" (UID: "0bc5d74c-7648-4a3a-a858-dc699a6e0389"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.880139 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc5d74c-7648-4a3a-a858-dc699a6e0389-kube-api-access-sgptn" (OuterVolumeSpecName: "kube-api-access-sgptn") pod "0bc5d74c-7648-4a3a-a858-dc699a6e0389" (UID: "0bc5d74c-7648-4a3a-a858-dc699a6e0389"). InnerVolumeSpecName "kube-api-access-sgptn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.911465 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bc5d74c-7648-4a3a-a858-dc699a6e0389" (UID: "0bc5d74c-7648-4a3a-a858-dc699a6e0389"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.948677 4849 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.948736 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.948750 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.948761 4849 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:11 crc kubenswrapper[4849]: I1209 11:47:11.948773 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgptn\" (UniqueName: \"kubernetes.io/projected/0bc5d74c-7648-4a3a-a858-dc699a6e0389-kube-api-access-sgptn\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.023144 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-config-data" (OuterVolumeSpecName: "config-data") pod "0bc5d74c-7648-4a3a-a858-dc699a6e0389" (UID: "0bc5d74c-7648-4a3a-a858-dc699a6e0389"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.055255 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d74c-7648-4a3a-a858-dc699a6e0389-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.101967 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tbj8g" event={"ID":"0bc5d74c-7648-4a3a-a858-dc699a6e0389","Type":"ContainerDied","Data":"813eea1f76da6971c766a8a5f2d42d6d55cc37e0245ce82d08ea332d9588ea32"} Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.102013 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="813eea1f76da6971c766a8a5f2d42d6d55cc37e0245ce82d08ea332d9588ea32" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.102083 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tbj8g" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.463972 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:47:12 crc kubenswrapper[4849]: W1209 11:47:12.574950 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42c2434_f83d_4531_9f19_073674ff63dd.slice/crio-bb93c2ff5a6e0a9a6c731fe9c8cd0673707c69571d6b9b5146b72626375c5db7 WatchSource:0}: Error finding container bb93c2ff5a6e0a9a6c731fe9c8cd0673707c69571d6b9b5146b72626375c5db7: Status 404 returned error can't find the container with id bb93c2ff5a6e0a9a6c731fe9c8cd0673707c69571d6b9b5146b72626375c5db7 Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.587515 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-sb\") pod \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.587586 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p8kq\" (UniqueName: \"kubernetes.io/projected/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-kube-api-access-8p8kq\") pod \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.587703 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-dns-svc\") pod \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.587747 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-nb\") pod \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.587776 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-config\") pod \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\" (UID: \"f51e65aa-7014-461e-8dce-8fb7aa29d8b7\") " Dec 09 11:47:12 crc kubenswrapper[4849]: 
I1209 11:47:12.605644 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-846cbdf45b-jlcvr"] Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.623762 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wjf4f"] Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.651355 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-kube-api-access-8p8kq" (OuterVolumeSpecName: "kube-api-access-8p8kq") pod "f51e65aa-7014-461e-8dce-8fb7aa29d8b7" (UID: "f51e65aa-7014-461e-8dce-8fb7aa29d8b7"). InnerVolumeSpecName "kube-api-access-8p8kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.700889 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f51e65aa-7014-461e-8dce-8fb7aa29d8b7" (UID: "f51e65aa-7014-461e-8dce-8fb7aa29d8b7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.705546 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.705605 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p8kq\" (UniqueName: \"kubernetes.io/projected/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-kube-api-access-8p8kq\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.724513 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-config" (OuterVolumeSpecName: "config") pod "f51e65aa-7014-461e-8dce-8fb7aa29d8b7" (UID: "f51e65aa-7014-461e-8dce-8fb7aa29d8b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.757030 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b9f4d9d4-l4khd"] Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.759972 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f51e65aa-7014-461e-8dce-8fb7aa29d8b7" (UID: "f51e65aa-7014-461e-8dce-8fb7aa29d8b7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:12 crc kubenswrapper[4849]: W1209 11:47:12.764106 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90d4aeb9_b28b_4315_9bbf_aab0e5247d9a.slice/crio-83f7dde274eeb5e1c2416b973196789356ddc9a853af25d8478ec789094c2332 WatchSource:0}: Error finding container 83f7dde274eeb5e1c2416b973196789356ddc9a853af25d8478ec789094c2332: Status 404 returned error can't find the container with id 83f7dde274eeb5e1c2416b973196789356ddc9a853af25d8478ec789094c2332 Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.769046 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f51e65aa-7014-461e-8dce-8fb7aa29d8b7" (UID: "f51e65aa-7014-461e-8dce-8fb7aa29d8b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.831897 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.831938 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.831954 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51e65aa-7014-461e-8dce-8fb7aa29d8b7-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:12 crc kubenswrapper[4849]: I1209 11:47:12.877473 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c58b6b59d-hkfg5"] Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.005678 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-76d4cfc555-fqqzj"] Dec 09 11:47:13 crc kubenswrapper[4849]: E1209 11:47:13.006355 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" containerName="dnsmasq-dns" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.006374 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" containerName="dnsmasq-dns" Dec 09 11:47:13 crc kubenswrapper[4849]: E1209 11:47:13.006395 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" containerName="init" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.006402 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" containerName="init" Dec 09 11:47:13 crc kubenswrapper[4849]: E1209 11:47:13.006627 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc5d74c-7648-4a3a-a858-dc699a6e0389" containerName="keystone-bootstrap" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.006638 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc5d74c-7648-4a3a-a858-dc699a6e0389" containerName="keystone-bootstrap" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.006822 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc5d74c-7648-4a3a-a858-dc699a6e0389" containerName="keystone-bootstrap" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.006845 4849 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" containerName="dnsmasq-dns" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.007563 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.018080 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.018112 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.018400 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rqcxg" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.018549 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.019068 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.019213 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.036962 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-scripts\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.037127 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl9rv\" (UniqueName: \"kubernetes.io/projected/f07ea8eb-8b14-491f-bf4a-f7409628ae82-kube-api-access-wl9rv\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.037191 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-combined-ca-bundle\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.037249 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-public-tls-certs\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.037313 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-fernet-keys\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.037344 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-internal-tls-certs\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.037382 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-config-data\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.037421 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-credential-keys\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.041957 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-69b864d45f-rvq8j"] Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.048454 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76d4cfc555-fqqzj"] Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.111587 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" event={"ID":"e42c2434-f83d-4531-9f19-073674ff63dd","Type":"ContainerStarted","Data":"bb93c2ff5a6e0a9a6c731fe9c8cd0673707c69571d6b9b5146b72626375c5db7"} Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.114259 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" event={"ID":"1b8073bc-e628-45c2-8d54-a455f73261af","Type":"ContainerStarted","Data":"798a957b796ba13b6048f77d2d238493678e6a6c625a53f35f06be4c5a48abeb"} Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.116115 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" event={"ID":"f51e65aa-7014-461e-8dce-8fb7aa29d8b7","Type":"ContainerDied","Data":"1869b32f2738fbb494ec7ff126df1295696631ab53c5d52af03ade9aab5ac035"} Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.116175 4849 scope.go:117] "RemoveContainer" containerID="328131a36a5f6bdff961a66e201b46d4aa296375a912934ff1b0ee3fc02db67e" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.116340 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-2lckk" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.125212 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerStarted","Data":"c8d08fbfc9bdd67b0ee3fc32004f074de1c0f231de61755e58dbe18cbfd14b26"} Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.135472 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c58b6b59d-hkfg5" event={"ID":"c2e91051-7b71-474b-a57d-c482d68f96b5","Type":"ContainerStarted","Data":"96067efea12d4bb4421a01efd1c4b964370b4a24c2f328965dc1845a4ca46a46"} Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.138153 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-scripts\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.138255 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl9rv\" (UniqueName: \"kubernetes.io/projected/f07ea8eb-8b14-491f-bf4a-f7409628ae82-kube-api-access-wl9rv\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.138312 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-combined-ca-bundle\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.138365 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-public-tls-certs\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.138393 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-fernet-keys\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.138437 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-internal-tls-certs\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.138468 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-config-data\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.138490 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-credential-keys\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.141891 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b9f4d9d4-l4khd" event={"ID":"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a","Type":"ContainerStarted","Data":"83f7dde274eeb5e1c2416b973196789356ddc9a853af25d8478ec789094c2332"} Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.143133 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-scripts\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.143210 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-credential-keys\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.146193 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-public-tls-certs\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.146273 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-69b864d45f-rvq8j" event={"ID":"54f7b258-dafd-4c17-85d7-457129b212de","Type":"ContainerStarted","Data":"9aabbb009845690e4ead7a97078cd02fe20d9671542c898078c1743fca7b73dd"} Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.146691 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-internal-tls-certs\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.151633 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-fernet-keys\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.156262 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-combined-ca-bundle\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.156310 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07ea8eb-8b14-491f-bf4a-f7409628ae82-config-data\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.188454 4849 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wl9rv\" (UniqueName: \"kubernetes.io/projected/f07ea8eb-8b14-491f-bf4a-f7409628ae82-kube-api-access-wl9rv\") pod \"keystone-76d4cfc555-fqqzj\" (UID: \"f07ea8eb-8b14-491f-bf4a-f7409628ae82\") " pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.273525 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-2lckk"] Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.281956 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-2lckk"] Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.329686 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.353202 4849 scope.go:117] "RemoveContainer" containerID="60b0e097f18f8a35947a16b336b5739988243bd9f6eda96a6b58d552e5ddce32" Dec 09 11:47:13 crc kubenswrapper[4849]: I1209 11:47:13.894710 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76d4cfc555-fqqzj"] Dec 09 11:47:13 crc kubenswrapper[4849]: W1209 11:47:13.915718 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf07ea8eb_8b14_491f_bf4a_f7409628ae82.slice/crio-df46c17abca778ef8c20c8be63c7903c2e2910ab2c525b2d33d78662e54bb65a WatchSource:0}: Error finding container df46c17abca778ef8c20c8be63c7903c2e2910ab2c525b2d33d78662e54bb65a: Status 404 returned error can't find the container with id df46c17abca778ef8c20c8be63c7903c2e2910ab2c525b2d33d78662e54bb65a Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.283587 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c58b6b59d-hkfg5" event={"ID":"c2e91051-7b71-474b-a57d-c482d68f96b5","Type":"ContainerStarted","Data":"1c4b09c1c9daeca1dc5380ded48ab60c3f888724152107f19e9e536fb33fa9bd"} Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.283970 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c58b6b59d-hkfg5" event={"ID":"c2e91051-7b71-474b-a57d-c482d68f96b5","Type":"ContainerStarted","Data":"c2c9ed36f83d936d97d64ab1b6a6b9f900a08bba66b497a699d7ee0881a3439d"} Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.283994 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.284021 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.288676 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76d4cfc555-fqqzj" event={"ID":"f07ea8eb-8b14-491f-bf4a-f7409628ae82","Type":"ContainerStarted","Data":"df46c17abca778ef8c20c8be63c7903c2e2910ab2c525b2d33d78662e54bb65a"} Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.313758 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b9f4d9d4-l4khd" event={"ID":"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a","Type":"ContainerStarted","Data":"42857042b44e5d0b0b8e557b27c5b66d91a349a39e09bb0a27b82a9aadc42109"} Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.313840 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b9f4d9d4-l4khd" 
event={"ID":"90d4aeb9-b28b-4315-9bbf-aab0e5247d9a","Type":"ContainerStarted","Data":"95e6b10e1645663396bb86f2d885cdf7b1ca8f2b959438b3ee46267f7b5afe64"} Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.314680 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.314710 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.329895 4849 generic.go:334] "Generic (PLEG): container finished" podID="e42c2434-f83d-4531-9f19-073674ff63dd" containerID="657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7" exitCode=0 Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.329974 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" event={"ID":"e42c2434-f83d-4531-9f19-073674ff63dd","Type":"ContainerDied","Data":"657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7"} Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.392910 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c58b6b59d-hkfg5" podStartSLOduration=6.392886839 podStartE2EDuration="6.392886839s" podCreationTimestamp="2025-12-09 11:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:14.334854171 +0000 UTC m=+1216.874738497" watchObservedRunningTime="2025-12-09 11:47:14.392886839 +0000 UTC m=+1216.932771165" Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.451225 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b9f4d9d4-l4khd" podStartSLOduration=4.451191264 podStartE2EDuration="4.451191264s" podCreationTimestamp="2025-12-09 11:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:14.450782014 +0000 UTC m=+1216.990666350" watchObservedRunningTime="2025-12-09 11:47:14.451191264 +0000 UTC m=+1216.991075580" Dec 09 11:47:14 crc kubenswrapper[4849]: I1209 11:47:14.573254 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51e65aa-7014-461e-8dce-8fb7aa29d8b7" path="/var/lib/kubelet/pods/f51e65aa-7014-461e-8dce-8fb7aa29d8b7/volumes" Dec 09 11:47:15 crc kubenswrapper[4849]: I1209 11:47:15.356337 4849 generic.go:334] "Generic (PLEG): container finished" podID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerID="1c4b09c1c9daeca1dc5380ded48ab60c3f888724152107f19e9e536fb33fa9bd" exitCode=1 Dec 09 11:47:15 crc kubenswrapper[4849]: I1209 11:47:15.356627 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c58b6b59d-hkfg5" event={"ID":"c2e91051-7b71-474b-a57d-c482d68f96b5","Type":"ContainerDied","Data":"1c4b09c1c9daeca1dc5380ded48ab60c3f888724152107f19e9e536fb33fa9bd"} Dec 09 11:47:15 crc kubenswrapper[4849]: I1209 11:47:15.357488 4849 scope.go:117] "RemoveContainer" containerID="1c4b09c1c9daeca1dc5380ded48ab60c3f888724152107f19e9e536fb33fa9bd" Dec 09 11:47:15 crc kubenswrapper[4849]: I1209 11:47:15.360083 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76d4cfc555-fqqzj" event={"ID":"f07ea8eb-8b14-491f-bf4a-f7409628ae82","Type":"ContainerStarted","Data":"7fea05e5bec15f1d5b488a9c10b9db36d575b9d66a8ac3d205246e09b4d461a5"} Dec 09 11:47:15 crc 
kubenswrapper[4849]: I1209 11:47:15.360282 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:15 crc kubenswrapper[4849]: I1209 11:47:15.364370 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmgsr" event={"ID":"df8301f3-a405-47fc-b1a8-475daf544079","Type":"ContainerStarted","Data":"2e1be5b125c60b0aba9b126959aadd9e4d47ed2cd5d0da84ff0030d34c9afccc"} Dec 09 11:47:15 crc kubenswrapper[4849]: I1209 11:47:15.447231 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-76d4cfc555-fqqzj" podStartSLOduration=3.44720995 podStartE2EDuration="3.44720995s" podCreationTimestamp="2025-12-09 11:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:15.39828885 +0000 UTC m=+1217.938173166" watchObservedRunningTime="2025-12-09 11:47:15.44720995 +0000 UTC m=+1217.987094266" Dec 09 11:47:15 crc kubenswrapper[4849]: I1209 11:47:15.454968 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nmgsr" podStartSLOduration=4.260224168 podStartE2EDuration="45.454947094s" podCreationTimestamp="2025-12-09 11:46:30 +0000 UTC" firstStartedPulling="2025-12-09 11:46:32.185314886 +0000 UTC m=+1174.725199202" lastFinishedPulling="2025-12-09 11:47:13.380037812 +0000 UTC m=+1215.919922128" observedRunningTime="2025-12-09 11:47:15.424839562 +0000 UTC m=+1217.964723878" watchObservedRunningTime="2025-12-09 11:47:15.454947094 +0000 UTC m=+1217.994831410" Dec 09 11:47:17 crc kubenswrapper[4849]: I1209 11:47:17.562030 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:18 crc kubenswrapper[4849]: I1209 11:47:18.408125 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" event={"ID":"e42c2434-f83d-4531-9f19-073674ff63dd","Type":"ContainerStarted","Data":"8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a"} Dec 09 11:47:18 crc kubenswrapper[4849]: I1209 11:47:18.408976 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:18 crc kubenswrapper[4849]: I1209 11:47:18.436537 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" podStartSLOduration=11.436520392 podStartE2EDuration="11.436520392s" podCreationTimestamp="2025-12-09 11:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:18.433895547 +0000 UTC m=+1220.973779863" watchObservedRunningTime="2025-12-09 11:47:18.436520392 +0000 UTC m=+1220.976404708" Dec 09 11:47:18 crc kubenswrapper[4849]: I1209 11:47:18.567707 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": dial tcp 10.217.0.145:9311: connect: connection refused" Dec 09 11:47:19 crc kubenswrapper[4849]: I1209 11:47:19.421110 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c58b6b59d-hkfg5" 
event={"ID":"c2e91051-7b71-474b-a57d-c482d68f96b5","Type":"ContainerStarted","Data":"886f8f3b43b0a6c4e7a2d395075a1028f36ff6e46b41d8226c9d8345bd425562"} Dec 09 11:47:19 crc kubenswrapper[4849]: I1209 11:47:19.421458 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:19 crc kubenswrapper[4849]: I1209 11:47:19.421635 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": dial tcp 10.217.0.145:9311: connect: connection refused" Dec 09 11:47:19 crc kubenswrapper[4849]: I1209 11:47:19.424199 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-69b864d45f-rvq8j" event={"ID":"54f7b258-dafd-4c17-85d7-457129b212de","Type":"ContainerStarted","Data":"c280022b1b039fb730bd2e4c9025b469d93c6bf83c300117e9e4ae871ecd4522"} Dec 09 11:47:19 crc kubenswrapper[4849]: I1209 11:47:19.431137 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" event={"ID":"1b8073bc-e628-45c2-8d54-a455f73261af","Type":"ContainerStarted","Data":"6199d58c51d717a93a019be02cf6574710d754568fa181d7fb8f2c19b501121c"} Dec 09 11:47:20 crc kubenswrapper[4849]: I1209 11:47:20.439939 4849 generic.go:334] "Generic (PLEG): container finished" podID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerID="886f8f3b43b0a6c4e7a2d395075a1028f36ff6e46b41d8226c9d8345bd425562" exitCode=1 Dec 09 11:47:20 crc kubenswrapper[4849]: I1209 11:47:20.440130 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c58b6b59d-hkfg5" event={"ID":"c2e91051-7b71-474b-a57d-c482d68f96b5","Type":"ContainerDied","Data":"886f8f3b43b0a6c4e7a2d395075a1028f36ff6e46b41d8226c9d8345bd425562"} Dec 09 11:47:20 crc kubenswrapper[4849]: I1209 11:47:20.440246 4849 scope.go:117] "RemoveContainer" containerID="1c4b09c1c9daeca1dc5380ded48ab60c3f888724152107f19e9e536fb33fa9bd" Dec 09 11:47:20 crc kubenswrapper[4849]: I1209 11:47:20.440533 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": dial tcp 10.217.0.145:9311: connect: connection refused" Dec 09 11:47:20 crc kubenswrapper[4849]: I1209 11:47:20.440935 4849 scope.go:117] "RemoveContainer" containerID="886f8f3b43b0a6c4e7a2d395075a1028f36ff6e46b41d8226c9d8345bd425562" Dec 09 11:47:20 crc kubenswrapper[4849]: E1209 11:47:20.442400 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-api pod=barbican-api-c58b6b59d-hkfg5_openstack(c2e91051-7b71-474b-a57d-c482d68f96b5)\"" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" Dec 09 11:47:20 crc kubenswrapper[4849]: I1209 11:47:20.562122 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:20 crc kubenswrapper[4849]: I1209 11:47:20.562773 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" probeResult="failure" output="Get 
\"http://10.217.0.145:9311/healthcheck\": dial tcp 10.217.0.145:9311: connect: connection refused" Dec 09 11:47:21 crc kubenswrapper[4849]: I1209 11:47:21.468009 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": dial tcp 10.217.0.145:9311: connect: connection refused" Dec 09 11:47:21 crc kubenswrapper[4849]: I1209 11:47:21.468343 4849 scope.go:117] "RemoveContainer" containerID="886f8f3b43b0a6c4e7a2d395075a1028f36ff6e46b41d8226c9d8345bd425562" Dec 09 11:47:21 crc kubenswrapper[4849]: E1209 11:47:21.468640 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-api pod=barbican-api-c58b6b59d-hkfg5_openstack(c2e91051-7b71-474b-a57d-c482d68f96b5)\"" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" Dec 09 11:47:22 crc kubenswrapper[4849]: I1209 11:47:22.476122 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": dial tcp 10.217.0.145:9311: connect: connection refused" Dec 09 11:47:22 crc kubenswrapper[4849]: I1209 11:47:22.476224 4849 scope.go:117] "RemoveContainer" containerID="886f8f3b43b0a6c4e7a2d395075a1028f36ff6e46b41d8226c9d8345bd425562" Dec 09 11:47:22 crc kubenswrapper[4849]: E1209 11:47:22.477144 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-api pod=barbican-api-c58b6b59d-hkfg5_openstack(c2e91051-7b71-474b-a57d-c482d68f96b5)\"" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" Dec 09 11:47:23 crc kubenswrapper[4849]: I1209 11:47:23.242628 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:23 crc kubenswrapper[4849]: I1209 11:47:23.316206 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-xqkc4"] Dec 09 11:47:23 crc kubenswrapper[4849]: I1209 11:47:23.316496 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" podUID="dd84db06-a743-4726-bd6e-e694e3a17011" containerName="dnsmasq-dns" containerID="cri-o://0140b7b1561ffc9cf03bea1e9837ef67a5adf98839444d775f781ada1a51d672" gracePeriod=10 Dec 09 11:47:23 crc kubenswrapper[4849]: I1209 11:47:23.505947 4849 generic.go:334] "Generic (PLEG): container finished" podID="dd84db06-a743-4726-bd6e-e694e3a17011" containerID="0140b7b1561ffc9cf03bea1e9837ef67a5adf98839444d775f781ada1a51d672" exitCode=0 Dec 09 11:47:23 crc kubenswrapper[4849]: I1209 11:47:23.506278 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" event={"ID":"dd84db06-a743-4726-bd6e-e694e3a17011","Type":"ContainerDied","Data":"0140b7b1561ffc9cf03bea1e9837ef67a5adf98839444d775f781ada1a51d672"} Dec 09 11:47:23 crc kubenswrapper[4849]: I1209 11:47:23.587610 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" 
containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": dial tcp 10.217.0.145:9311: connect: connection refused" Dec 09 11:47:23 crc kubenswrapper[4849]: I1209 11:47:23.588092 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": dial tcp 10.217.0.145:9311: connect: connection refused" Dec 09 11:47:25 crc kubenswrapper[4849]: I1209 11:47:25.166218 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:25 crc kubenswrapper[4849]: I1209 11:47:25.225575 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7b9f4d9d4-l4khd" podUID="90d4aeb9-b28b-4315-9bbf-aab0e5247d9a" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.146:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 11:47:25 crc kubenswrapper[4849]: I1209 11:47:25.473773 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b9f4d9d4-l4khd" Dec 09 11:47:25 crc kubenswrapper[4849]: I1209 11:47:25.554197 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-c58b6b59d-hkfg5"] Dec 09 11:47:25 crc kubenswrapper[4849]: I1209 11:47:25.554425 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" containerID="cri-o://c2c9ed36f83d936d97d64ab1b6a6b9f900a08bba66b497a699d7ee0881a3439d" gracePeriod=30 Dec 09 11:47:25 crc kubenswrapper[4849]: I1209 11:47:25.555239 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c58b6b59d-hkfg5" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": dial tcp 10.217.0.145:9311: connect: connection refused" Dec 09 11:47:26 crc kubenswrapper[4849]: I1209 11:47:26.586237 4849 generic.go:334] "Generic (PLEG): container finished" podID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerID="c2c9ed36f83d936d97d64ab1b6a6b9f900a08bba66b497a699d7ee0881a3439d" exitCode=143 Dec 09 11:47:26 crc kubenswrapper[4849]: I1209 11:47:26.586465 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c58b6b59d-hkfg5" event={"ID":"c2e91051-7b71-474b-a57d-c482d68f96b5","Type":"ContainerDied","Data":"c2c9ed36f83d936d97d64ab1b6a6b9f900a08bba66b497a699d7ee0881a3439d"} Dec 09 11:47:26 crc kubenswrapper[4849]: I1209 11:47:26.667661 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" podUID="dd84db06-a743-4726-bd6e-e694e3a17011" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.285648 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.390856 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2e91051-7b71-474b-a57d-c482d68f96b5-logs\") pod \"c2e91051-7b71-474b-a57d-c482d68f96b5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.390939 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg9pc\" (UniqueName: \"kubernetes.io/projected/c2e91051-7b71-474b-a57d-c482d68f96b5-kube-api-access-wg9pc\") pod \"c2e91051-7b71-474b-a57d-c482d68f96b5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.391677 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2e91051-7b71-474b-a57d-c482d68f96b5-logs" (OuterVolumeSpecName: "logs") pod "c2e91051-7b71-474b-a57d-c482d68f96b5" (UID: "c2e91051-7b71-474b-a57d-c482d68f96b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.391030 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data-custom\") pod \"c2e91051-7b71-474b-a57d-c482d68f96b5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.392088 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data\") pod \"c2e91051-7b71-474b-a57d-c482d68f96b5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.392536 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-combined-ca-bundle\") pod \"c2e91051-7b71-474b-a57d-c482d68f96b5\" (UID: \"c2e91051-7b71-474b-a57d-c482d68f96b5\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.393032 4849 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2e91051-7b71-474b-a57d-c482d68f96b5-logs\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.405326 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e91051-7b71-474b-a57d-c482d68f96b5-kube-api-access-wg9pc" (OuterVolumeSpecName: "kube-api-access-wg9pc") pod "c2e91051-7b71-474b-a57d-c482d68f96b5" (UID: "c2e91051-7b71-474b-a57d-c482d68f96b5"). InnerVolumeSpecName "kube-api-access-wg9pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.427883 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c2e91051-7b71-474b-a57d-c482d68f96b5" (UID: "c2e91051-7b71-474b-a57d-c482d68f96b5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.485163 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.489507 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2e91051-7b71-474b-a57d-c482d68f96b5" (UID: "c2e91051-7b71-474b-a57d-c482d68f96b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.495047 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg9pc\" (UniqueName: \"kubernetes.io/projected/c2e91051-7b71-474b-a57d-c482d68f96b5-kube-api-access-wg9pc\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.495073 4849 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.495082 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.645990 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-config\") pod \"dd84db06-a743-4726-bd6e-e694e3a17011\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.646125 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-dns-svc\") pod \"dd84db06-a743-4726-bd6e-e694e3a17011\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.646187 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-nb\") pod \"dd84db06-a743-4726-bd6e-e694e3a17011\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.646251 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c76k9\" (UniqueName: \"kubernetes.io/projected/dd84db06-a743-4726-bd6e-e694e3a17011-kube-api-access-c76k9\") pod \"dd84db06-a743-4726-bd6e-e694e3a17011\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.646323 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-sb\") pod \"dd84db06-a743-4726-bd6e-e694e3a17011\" (UID: \"dd84db06-a743-4726-bd6e-e694e3a17011\") " Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.667320 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerStarted","Data":"68e09ecef5bbf909e9b8e614e67ff59b33bef9fb24004bca2ace8a5917ad795d"} Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.667497 4849 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="ceilometer-central-agent" containerID="cri-o://d737fa8f20ae3532085d512f58419e3598cd420f88a3ababdce2de078f5c00fa" gracePeriod=30 Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.667760 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.668039 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="proxy-httpd" containerID="cri-o://68e09ecef5bbf909e9b8e614e67ff59b33bef9fb24004bca2ace8a5917ad795d" gracePeriod=30 Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.668117 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="sg-core" containerID="cri-o://c8d08fbfc9bdd67b0ee3fc32004f074de1c0f231de61755e58dbe18cbfd14b26" gracePeriod=30 Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.668156 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="ceilometer-notification-agent" containerID="cri-o://fe801c71078b298ecdbcd0b422e346c62d4a765cf23f377f60d3dfccbfa16066" gracePeriod=30 Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.669863 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd84db06-a743-4726-bd6e-e694e3a17011-kube-api-access-c76k9" (OuterVolumeSpecName: "kube-api-access-c76k9") pod "dd84db06-a743-4726-bd6e-e694e3a17011" (UID: "dd84db06-a743-4726-bd6e-e694e3a17011"). InnerVolumeSpecName "kube-api-access-c76k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.670852 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c58b6b59d-hkfg5" event={"ID":"c2e91051-7b71-474b-a57d-c482d68f96b5","Type":"ContainerDied","Data":"96067efea12d4bb4421a01efd1c4b964370b4a24c2f328965dc1845a4ca46a46"} Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.670891 4849 scope.go:117] "RemoveContainer" containerID="886f8f3b43b0a6c4e7a2d395075a1028f36ff6e46b41d8226c9d8345bd425562" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.670955 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c58b6b59d-hkfg5" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.704176 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-69b864d45f-rvq8j" event={"ID":"54f7b258-dafd-4c17-85d7-457129b212de","Type":"ContainerStarted","Data":"5251999b409d2de45e9e1841d0108aa87f1c173666d4c912c4e3630e605aba5c"} Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.708551 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" event={"ID":"1b8073bc-e628-45c2-8d54-a455f73261af","Type":"ContainerStarted","Data":"edcbf985ed4f32d744a723c63ca05203e72cabf48709499b6ae63aa5186c8c1d"} Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.721677 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data" (OuterVolumeSpecName: "config-data") pod "c2e91051-7b71-474b-a57d-c482d68f96b5" (UID: "c2e91051-7b71-474b-a57d-c482d68f96b5"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.748156 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" event={"ID":"dd84db06-a743-4726-bd6e-e694e3a17011","Type":"ContainerDied","Data":"4546f6c881a481584cec02ca37cf370ddc196566c0a44673fd1c2b9b61570f8c"} Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.748251 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-xqkc4" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.757299 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c76k9\" (UniqueName: \"kubernetes.io/projected/dd84db06-a743-4726-bd6e-e694e3a17011-kube-api-access-c76k9\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.757331 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2e91051-7b71-474b-a57d-c482d68f96b5-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.808326 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.369445028 podStartE2EDuration="56.808303904s" podCreationTimestamp="2025-12-09 11:46:31 +0000 UTC" firstStartedPulling="2025-12-09 11:46:32.552233743 +0000 UTC m=+1175.092118059" lastFinishedPulling="2025-12-09 11:47:26.991092619 +0000 UTC m=+1229.530976935" observedRunningTime="2025-12-09 11:47:27.72199976 +0000 UTC m=+1230.261884096" watchObservedRunningTime="2025-12-09 11:47:27.808303904 +0000 UTC m=+1230.348188220" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.813311 4849 scope.go:117] "RemoveContainer" containerID="c2c9ed36f83d936d97d64ab1b6a6b9f900a08bba66b497a699d7ee0881a3439d" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.815484 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd84db06-a743-4726-bd6e-e694e3a17011" (UID: "dd84db06-a743-4726-bd6e-e694e3a17011"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.821339 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-config" (OuterVolumeSpecName: "config") pod "dd84db06-a743-4726-bd6e-e694e3a17011" (UID: "dd84db06-a743-4726-bd6e-e694e3a17011"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.829022 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-69b864d45f-rvq8j" podStartSLOduration=15.338639164 podStartE2EDuration="20.829002501s" podCreationTimestamp="2025-12-09 11:47:07 +0000 UTC" firstStartedPulling="2025-12-09 11:47:13.353200573 +0000 UTC m=+1215.893084889" lastFinishedPulling="2025-12-09 11:47:18.84356391 +0000 UTC m=+1221.383448226" observedRunningTime="2025-12-09 11:47:27.813845702 +0000 UTC m=+1230.353730018" watchObservedRunningTime="2025-12-09 11:47:27.829002501 +0000 UTC m=+1230.368886817" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.844011 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd84db06-a743-4726-bd6e-e694e3a17011" (UID: "dd84db06-a743-4726-bd6e-e694e3a17011"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.848774 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-846cbdf45b-jlcvr" podStartSLOduration=14.574628106 podStartE2EDuration="20.848753114s" podCreationTimestamp="2025-12-09 11:47:07 +0000 UTC" firstStartedPulling="2025-12-09 11:47:12.567071534 +0000 UTC m=+1215.106955850" lastFinishedPulling="2025-12-09 11:47:18.841196542 +0000 UTC m=+1221.381080858" observedRunningTime="2025-12-09 11:47:27.846046116 +0000 UTC m=+1230.385930432" watchObservedRunningTime="2025-12-09 11:47:27.848753114 +0000 UTC m=+1230.388637430" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.862820 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd84db06-a743-4726-bd6e-e694e3a17011" (UID: "dd84db06-a743-4726-bd6e-e694e3a17011"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.871969 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.872002 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.872013 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.872022 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd84db06-a743-4726-bd6e-e694e3a17011-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.942571 4849 scope.go:117] "RemoveContainer" containerID="0140b7b1561ffc9cf03bea1e9837ef67a5adf98839444d775f781ada1a51d672" Dec 09 11:47:27 crc kubenswrapper[4849]: I1209 11:47:27.990372 4849 scope.go:117] "RemoveContainer" containerID="23d4539d903c668c14d8fccacc536191ef416392f7cfd579a2c82b9524117766" Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.026490 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-c58b6b59d-hkfg5"] Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.037230 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-c58b6b59d-hkfg5"] Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.097845 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-xqkc4"] Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.101914 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-xqkc4"] Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.561442 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" path="/var/lib/kubelet/pods/c2e91051-7b71-474b-a57d-c482d68f96b5/volumes" Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.562059 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd84db06-a743-4726-bd6e-e694e3a17011" path="/var/lib/kubelet/pods/dd84db06-a743-4726-bd6e-e694e3a17011/volumes" Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.772650 4849 generic.go:334] "Generic (PLEG): container finished" podID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerID="c8d08fbfc9bdd67b0ee3fc32004f074de1c0f231de61755e58dbe18cbfd14b26" exitCode=2 Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.772892 4849 generic.go:334] "Generic (PLEG): container finished" podID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerID="d737fa8f20ae3532085d512f58419e3598cd420f88a3ababdce2de078f5c00fa" exitCode=0 Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.772928 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerDied","Data":"c8d08fbfc9bdd67b0ee3fc32004f074de1c0f231de61755e58dbe18cbfd14b26"} Dec 09 11:47:28 crc kubenswrapper[4849]: I1209 11:47:28.772953 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerDied","Data":"d737fa8f20ae3532085d512f58419e3598cd420f88a3ababdce2de078f5c00fa"} Dec 09 11:47:29 crc kubenswrapper[4849]: E1209 11:47:29.018486 4849 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8301f3_a405_47fc_b1a8_475daf544079.slice/crio-conmon-2e1be5b125c60b0aba9b126959aadd9e4d47ed2cd5d0da84ff0030d34c9afccc.scope\": RecentStats: unable to find data in memory cache]" Dec 09 11:47:29 crc kubenswrapper[4849]: I1209 11:47:29.405922 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:47:29 crc kubenswrapper[4849]: I1209 11:47:29.788847 4849 generic.go:334] "Generic (PLEG): container finished" podID="df8301f3-a405-47fc-b1a8-475daf544079" containerID="2e1be5b125c60b0aba9b126959aadd9e4d47ed2cd5d0da84ff0030d34c9afccc" exitCode=0 Dec 09 11:47:29 crc kubenswrapper[4849]: I1209 11:47:29.788976 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmgsr" event={"ID":"df8301f3-a405-47fc-b1a8-475daf544079","Type":"ContainerDied","Data":"2e1be5b125c60b0aba9b126959aadd9e4d47ed2cd5d0da84ff0030d34c9afccc"} Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.183248 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.338064 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvlfs\" (UniqueName: \"kubernetes.io/projected/df8301f3-a405-47fc-b1a8-475daf544079-kube-api-access-kvlfs\") pod \"df8301f3-a405-47fc-b1a8-475daf544079\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.338246 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df8301f3-a405-47fc-b1a8-475daf544079-etc-machine-id\") pod \"df8301f3-a405-47fc-b1a8-475daf544079\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.338371 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8301f3-a405-47fc-b1a8-475daf544079-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "df8301f3-a405-47fc-b1a8-475daf544079" (UID: "df8301f3-a405-47fc-b1a8-475daf544079"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.338477 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-config-data\") pod \"df8301f3-a405-47fc-b1a8-475daf544079\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.339193 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-combined-ca-bundle\") pod \"df8301f3-a405-47fc-b1a8-475daf544079\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.339255 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-db-sync-config-data\") pod \"df8301f3-a405-47fc-b1a8-475daf544079\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.339311 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-scripts\") pod \"df8301f3-a405-47fc-b1a8-475daf544079\" (UID: \"df8301f3-a405-47fc-b1a8-475daf544079\") " Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.340203 4849 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df8301f3-a405-47fc-b1a8-475daf544079-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.350443 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "df8301f3-a405-47fc-b1a8-475daf544079" (UID: "df8301f3-a405-47fc-b1a8-475daf544079"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.350474 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8301f3-a405-47fc-b1a8-475daf544079-kube-api-access-kvlfs" (OuterVolumeSpecName: "kube-api-access-kvlfs") pod "df8301f3-a405-47fc-b1a8-475daf544079" (UID: "df8301f3-a405-47fc-b1a8-475daf544079"). InnerVolumeSpecName "kube-api-access-kvlfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.350482 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-scripts" (OuterVolumeSpecName: "scripts") pod "df8301f3-a405-47fc-b1a8-475daf544079" (UID: "df8301f3-a405-47fc-b1a8-475daf544079"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.374004 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df8301f3-a405-47fc-b1a8-475daf544079" (UID: "df8301f3-a405-47fc-b1a8-475daf544079"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.392651 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-config-data" (OuterVolumeSpecName: "config-data") pod "df8301f3-a405-47fc-b1a8-475daf544079" (UID: "df8301f3-a405-47fc-b1a8-475daf544079"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.442356 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.442402 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.442453 4849 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.442463 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8301f3-a405-47fc-b1a8-475daf544079-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.442473 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvlfs\" (UniqueName: \"kubernetes.io/projected/df8301f3-a405-47fc-b1a8-475daf544079-kube-api-access-kvlfs\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.809286 4849 generic.go:334] "Generic (PLEG): container finished" podID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerID="fe801c71078b298ecdbcd0b422e346c62d4a765cf23f377f60d3dfccbfa16066" exitCode=0 Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.809464 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerDied","Data":"fe801c71078b298ecdbcd0b422e346c62d4a765cf23f377f60d3dfccbfa16066"} Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.811545 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmgsr" event={"ID":"df8301f3-a405-47fc-b1a8-475daf544079","Type":"ContainerDied","Data":"231b9d52b3f20de67a8e5ac232628753127960037b884d9aae8643abb9742c42"} Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.811579 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="231b9d52b3f20de67a8e5ac232628753127960037b884d9aae8643abb9742c42" Dec 09 11:47:31 crc kubenswrapper[4849]: I1209 11:47:31.811639 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nmgsr" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.194404 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 11:47:32 crc kubenswrapper[4849]: E1209 11:47:32.195073 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195103 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api-log" Dec 09 11:47:32 crc kubenswrapper[4849]: E1209 11:47:32.195123 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd84db06-a743-4726-bd6e-e694e3a17011" containerName="init" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195133 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd84db06-a743-4726-bd6e-e694e3a17011" containerName="init" Dec 09 11:47:32 crc kubenswrapper[4849]: E1209 11:47:32.195151 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8301f3-a405-47fc-b1a8-475daf544079" containerName="cinder-db-sync" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195160 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8301f3-a405-47fc-b1a8-475daf544079" containerName="cinder-db-sync" Dec 09 11:47:32 crc kubenswrapper[4849]: E1209 11:47:32.195171 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195179 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api" Dec 09 11:47:32 crc kubenswrapper[4849]: E1209 11:47:32.195194 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd84db06-a743-4726-bd6e-e694e3a17011" containerName="dnsmasq-dns" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195201 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd84db06-a743-4726-bd6e-e694e3a17011" containerName="dnsmasq-dns" Dec 09 11:47:32 crc kubenswrapper[4849]: E1209 11:47:32.195218 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195228 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195423 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd84db06-a743-4726-bd6e-e694e3a17011" containerName="dnsmasq-dns" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195458 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195473 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8301f3-a405-47fc-b1a8-475daf544079" containerName="cinder-db-sync" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195488 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" containerName="barbican-api" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.195500 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e91051-7b71-474b-a57d-c482d68f96b5" 
containerName="barbican-api-log" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.196668 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.206610 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.206971 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.207109 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2rmx9" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.207262 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.207398 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.262079 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fk2w\" (UniqueName: \"kubernetes.io/projected/04cc0062-bd46-48a2-b761-f0c8e377cace-kube-api-access-8fk2w\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.262123 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.262170 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.262184 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.262218 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc0062-bd46-48a2-b761-f0c8e377cace-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.262247 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-scripts\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.305631 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsdmf"] Dec 09 11:47:32 crc 
kubenswrapper[4849]: I1209 11:47:32.307516 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.318560 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsdmf"] Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.378864 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fk2w\" (UniqueName: \"kubernetes.io/projected/04cc0062-bd46-48a2-b761-f0c8e377cace-kube-api-access-8fk2w\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.378933 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.379009 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.379034 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.379087 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc0062-bd46-48a2-b761-f0c8e377cace-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.387942 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-scripts\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.398370 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc0062-bd46-48a2-b761-f0c8e377cace-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.435737 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.444856 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fk2w\" (UniqueName: \"kubernetes.io/projected/04cc0062-bd46-48a2-b761-f0c8e377cace-kube-api-access-8fk2w\") pod 
\"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.453895 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.489182 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-scripts\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.490957 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-config\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.490999 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.491037 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x5zz\" (UniqueName: \"kubernetes.io/projected/7a5d6732-8e11-475a-a7ea-b5d1588e5770-kube-api-access-5x5zz\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.491089 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.491122 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.513998 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data\") pod \"cinder-scheduler-0\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.541102 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.594164 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-config\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.594579 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.594624 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x5zz\" (UniqueName: \"kubernetes.io/projected/7a5d6732-8e11-475a-a7ea-b5d1588e5770-kube-api-access-5x5zz\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.594693 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.594734 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.595564 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.596288 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.596867 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-config\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.597243 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.647070 4849 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x5zz\" (UniqueName: \"kubernetes.io/projected/7a5d6732-8e11-475a-a7ea-b5d1588e5770-kube-api-access-5x5zz\") pod \"dnsmasq-dns-6d97fcdd8f-fsdmf\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.748317 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c689fb97-j4mnm" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.774217 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.790160 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.807192 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.838137 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.895426 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dbfd748b8-p8g49"] Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.897630 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dbfd748b8-p8g49" podUID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerName="neutron-httpd" containerID="cri-o://5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31" gracePeriod=30 Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.898642 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dbfd748b8-p8g49" podUID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerName="neutron-api" containerID="cri-o://7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2" gracePeriod=30 Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.905469 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.905543 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.905626 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data-custom\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.905822 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh24s\" (UniqueName: \"kubernetes.io/projected/8acf922a-cec2-429a-b26a-66de573fa0f6-kube-api-access-wh24s\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:32 crc 
kubenswrapper[4849]: I1209 11:47:32.906142 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-scripts\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.906935 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8acf922a-cec2-429a-b26a-66de573fa0f6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.907007 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8acf922a-cec2-429a-b26a-66de573fa0f6-logs\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:32 crc kubenswrapper[4849]: I1209 11:47:32.936352 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.010181 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data-custom\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.017223 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh24s\" (UniqueName: \"kubernetes.io/projected/8acf922a-cec2-429a-b26a-66de573fa0f6-kube-api-access-wh24s\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.017443 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-scripts\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.017604 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8acf922a-cec2-429a-b26a-66de573fa0f6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.017632 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8acf922a-cec2-429a-b26a-66de573fa0f6-logs\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.017820 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.017897 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.019799 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8acf922a-cec2-429a-b26a-66de573fa0f6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.023180 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8acf922a-cec2-429a-b26a-66de573fa0f6-logs\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.034078 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.035244 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-scripts\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.036379 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.044414 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data-custom\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.081938 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh24s\" (UniqueName: \"kubernetes.io/projected/8acf922a-cec2-429a-b26a-66de573fa0f6-kube-api-access-wh24s\") pod \"cinder-api-0\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.125010 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.520141 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.776785 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsdmf"] Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.844211 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" event={"ID":"7a5d6732-8e11-475a-a7ea-b5d1588e5770","Type":"ContainerStarted","Data":"0c69db71d5dadcad4a9cbccc7b9b882a66e5337283983aa2cd15203d3aa70383"} Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.845810 4849 generic.go:334] "Generic (PLEG): container finished" podID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerID="5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31" exitCode=0 Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.845900 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbfd748b8-p8g49" event={"ID":"bfa8b5b6-c9f2-40c6-8e55-b465168d380a","Type":"ContainerDied","Data":"5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31"} Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.846620 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04cc0062-bd46-48a2-b761-f0c8e377cace","Type":"ContainerStarted","Data":"61c392c0c02eaee55e8829b4c2ffb52af368ecda7597e1d05c6036b2c08419e5"} Dec 09 11:47:33 crc kubenswrapper[4849]: I1209 11:47:33.904591 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 11:47:34 crc kubenswrapper[4849]: I1209 11:47:34.864131 4849 generic.go:334] "Generic (PLEG): container finished" podID="7a5d6732-8e11-475a-a7ea-b5d1588e5770" containerID="a58ab147c8c30cc722aed8b5c896295949b3a789433b74c5848c0725f1916b5c" exitCode=0 Dec 09 11:47:34 crc kubenswrapper[4849]: I1209 11:47:34.864475 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" event={"ID":"7a5d6732-8e11-475a-a7ea-b5d1588e5770","Type":"ContainerDied","Data":"a58ab147c8c30cc722aed8b5c896295949b3a789433b74c5848c0725f1916b5c"} Dec 09 11:47:34 crc kubenswrapper[4849]: I1209 11:47:34.886280 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8acf922a-cec2-429a-b26a-66de573fa0f6","Type":"ContainerStarted","Data":"66e694c6866409692b28e5fc5ad05cf083af50aa1b8ad2e3c27f1daee377708b"} Dec 09 11:47:35 crc kubenswrapper[4849]: I1209 11:47:35.071104 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:35 crc kubenswrapper[4849]: I1209 11:47:35.073064 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d49689b4-4nl2p" Dec 09 11:47:35 crc kubenswrapper[4849]: I1209 11:47:35.695235 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 11:47:35 crc kubenswrapper[4849]: I1209 11:47:35.896196 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04cc0062-bd46-48a2-b761-f0c8e377cace","Type":"ContainerStarted","Data":"34fc4578940807cbb39415fdd6e679b60e9fcd6e7802674b291a1a2a9a511536"} Dec 09 11:47:35 crc kubenswrapper[4849]: I1209 11:47:35.902524 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" 
event={"ID":"7a5d6732-8e11-475a-a7ea-b5d1588e5770","Type":"ContainerStarted","Data":"acc0df9a8d7c73e96c591a3f7c327ebca1b724a8c1a017b82d8c2090a1da80f9"} Dec 09 11:47:35 crc kubenswrapper[4849]: I1209 11:47:35.902619 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:35 crc kubenswrapper[4849]: I1209 11:47:35.912096 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8acf922a-cec2-429a-b26a-66de573fa0f6","Type":"ContainerStarted","Data":"13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5"} Dec 09 11:47:35 crc kubenswrapper[4849]: I1209 11:47:35.931758 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" podStartSLOduration=3.931739313 podStartE2EDuration="3.931739313s" podCreationTimestamp="2025-12-09 11:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:35.930620184 +0000 UTC m=+1238.470504520" watchObservedRunningTime="2025-12-09 11:47:35.931739313 +0000 UTC m=+1238.471623639" Dec 09 11:47:36 crc kubenswrapper[4849]: I1209 11:47:36.922770 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04cc0062-bd46-48a2-b761-f0c8e377cace","Type":"ContainerStarted","Data":"f59d615456551050541bd4ef0df0f079b19906960161c9dd4417eb68e6b51574"} Dec 09 11:47:36 crc kubenswrapper[4849]: I1209 11:47:36.924774 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8acf922a-cec2-429a-b26a-66de573fa0f6","Type":"ContainerStarted","Data":"1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1"} Dec 09 11:47:36 crc kubenswrapper[4849]: I1209 11:47:36.925010 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerName="cinder-api-log" containerID="cri-o://13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5" gracePeriod=30 Dec 09 11:47:36 crc kubenswrapper[4849]: I1209 11:47:36.925077 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerName="cinder-api" containerID="cri-o://1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1" gracePeriod=30 Dec 09 11:47:36 crc kubenswrapper[4849]: I1209 11:47:36.954184 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.16488592 podStartE2EDuration="4.954161258s" podCreationTimestamp="2025-12-09 11:47:32 +0000 UTC" firstStartedPulling="2025-12-09 11:47:33.530727032 +0000 UTC m=+1236.070611348" lastFinishedPulling="2025-12-09 11:47:34.32000237 +0000 UTC m=+1236.859886686" observedRunningTime="2025-12-09 11:47:36.950147567 +0000 UTC m=+1239.490031883" watchObservedRunningTime="2025-12-09 11:47:36.954161258 +0000 UTC m=+1239.494045574" Dec 09 11:47:36 crc kubenswrapper[4849]: I1209 11:47:36.976301 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.9762765 podStartE2EDuration="4.9762765s" podCreationTimestamp="2025-12-09 11:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:36.974809893 +0000 UTC 
m=+1239.514694209" watchObservedRunningTime="2025-12-09 11:47:36.9762765 +0000 UTC m=+1239.516160826" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.541520 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.654339 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.778508 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh24s\" (UniqueName: \"kubernetes.io/projected/8acf922a-cec2-429a-b26a-66de573fa0f6-kube-api-access-wh24s\") pod \"8acf922a-cec2-429a-b26a-66de573fa0f6\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.778549 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-combined-ca-bundle\") pod \"8acf922a-cec2-429a-b26a-66de573fa0f6\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.778718 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data\") pod \"8acf922a-cec2-429a-b26a-66de573fa0f6\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.778750 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8acf922a-cec2-429a-b26a-66de573fa0f6-etc-machine-id\") pod \"8acf922a-cec2-429a-b26a-66de573fa0f6\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.778776 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data-custom\") pod \"8acf922a-cec2-429a-b26a-66de573fa0f6\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.778798 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-scripts\") pod \"8acf922a-cec2-429a-b26a-66de573fa0f6\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.778844 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8acf922a-cec2-429a-b26a-66de573fa0f6-logs\") pod \"8acf922a-cec2-429a-b26a-66de573fa0f6\" (UID: \"8acf922a-cec2-429a-b26a-66de573fa0f6\") " Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.779529 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8acf922a-cec2-429a-b26a-66de573fa0f6-logs" (OuterVolumeSpecName: "logs") pod "8acf922a-cec2-429a-b26a-66de573fa0f6" (UID: "8acf922a-cec2-429a-b26a-66de573fa0f6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.779988 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acf922a-cec2-429a-b26a-66de573fa0f6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8acf922a-cec2-429a-b26a-66de573fa0f6" (UID: "8acf922a-cec2-429a-b26a-66de573fa0f6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.784879 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acf922a-cec2-429a-b26a-66de573fa0f6-kube-api-access-wh24s" (OuterVolumeSpecName: "kube-api-access-wh24s") pod "8acf922a-cec2-429a-b26a-66de573fa0f6" (UID: "8acf922a-cec2-429a-b26a-66de573fa0f6"). InnerVolumeSpecName "kube-api-access-wh24s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.785572 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-scripts" (OuterVolumeSpecName: "scripts") pod "8acf922a-cec2-429a-b26a-66de573fa0f6" (UID: "8acf922a-cec2-429a-b26a-66de573fa0f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.789526 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8acf922a-cec2-429a-b26a-66de573fa0f6" (UID: "8acf922a-cec2-429a-b26a-66de573fa0f6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.821160 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8acf922a-cec2-429a-b26a-66de573fa0f6" (UID: "8acf922a-cec2-429a-b26a-66de573fa0f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.842167 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data" (OuterVolumeSpecName: "config-data") pod "8acf922a-cec2-429a-b26a-66de573fa0f6" (UID: "8acf922a-cec2-429a-b26a-66de573fa0f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.881564 4849 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8acf922a-cec2-429a-b26a-66de573fa0f6-logs\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.881597 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh24s\" (UniqueName: \"kubernetes.io/projected/8acf922a-cec2-429a-b26a-66de573fa0f6-kube-api-access-wh24s\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.881607 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.881615 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.881626 4849 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8acf922a-cec2-429a-b26a-66de573fa0f6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.881633 4849 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.881640 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8acf922a-cec2-429a-b26a-66de573fa0f6-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.933672 4849 generic.go:334] "Generic (PLEG): container finished" podID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerID="1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1" exitCode=0 Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.933708 4849 generic.go:334] "Generic (PLEG): container finished" podID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerID="13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5" exitCode=143 Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.934768 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.937500 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8acf922a-cec2-429a-b26a-66de573fa0f6","Type":"ContainerDied","Data":"1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1"} Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.937541 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8acf922a-cec2-429a-b26a-66de573fa0f6","Type":"ContainerDied","Data":"13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5"} Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.937552 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8acf922a-cec2-429a-b26a-66de573fa0f6","Type":"ContainerDied","Data":"66e694c6866409692b28e5fc5ad05cf083af50aa1b8ad2e3c27f1daee377708b"} Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.937580 4849 scope.go:117] "RemoveContainer" containerID="1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.978795 4849 scope.go:117] "RemoveContainer" containerID="13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5" Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.983158 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 11:47:37 crc kubenswrapper[4849]: I1209 11:47:37.994847 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.006651 4849 scope.go:117] "RemoveContainer" containerID="1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.009529 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 11:47:38 crc kubenswrapper[4849]: E1209 11:47:38.009981 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerName="cinder-api-log" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.010007 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerName="cinder-api-log" Dec 09 11:47:38 crc kubenswrapper[4849]: E1209 11:47:38.010047 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerName="cinder-api" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.010056 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerName="cinder-api" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.010266 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerName="cinder-api" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.010299 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="8acf922a-cec2-429a-b26a-66de573fa0f6" containerName="cinder-api-log" Dec 09 11:47:38 crc kubenswrapper[4849]: E1209 11:47:38.009539 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1\": container with ID starting with 1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1 not found: ID does not exist" 
containerID="1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.010481 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1"} err="failed to get container status \"1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1\": rpc error: code = NotFound desc = could not find container \"1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1\": container with ID starting with 1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1 not found: ID does not exist" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.010528 4849 scope.go:117] "RemoveContainer" containerID="13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.012378 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: E1209 11:47:38.013466 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5\": container with ID starting with 13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5 not found: ID does not exist" containerID="13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.013505 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5"} err="failed to get container status \"13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5\": rpc error: code = NotFound desc = could not find container \"13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5\": container with ID starting with 13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5 not found: ID does not exist" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.013531 4849 scope.go:117] "RemoveContainer" containerID="1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.013742 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1"} err="failed to get container status \"1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1\": rpc error: code = NotFound desc = could not find container \"1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1\": container with ID starting with 1bb50ff6e8a0ea8e1a9fcf2d0c5d81fb3f9794b27f7818965fa1134cf37f5ce1 not found: ID does not exist" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.013769 4849 scope.go:117] "RemoveContainer" containerID="13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.013943 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5"} err="failed to get container status \"13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5\": rpc error: code = NotFound desc = could not find container \"13adac77af137e250e44a199cd8d2feef82367f7705219e822a6f25953da1af5\": container with ID starting with 
Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.018916 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.018936 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.019071 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.071517 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.188841 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-logs\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.188894 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.188963 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.189036 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.189058 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-scripts\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.189074 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.189096 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klbkj\" (UniqueName: \"kubernetes.io/projected/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-kube-api-access-klbkj\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.189116 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\"
(UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-config-data-custom\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.189145 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-config-data\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.290199 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-scripts\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.290247 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.290269 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klbkj\" (UniqueName: \"kubernetes.io/projected/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-kube-api-access-klbkj\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.290299 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-config-data-custom\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.290333 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-config-data\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.290383 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-logs\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.290409 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.290491 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.290552 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.291541 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-logs\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.291607 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.296747 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-config-data\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.297072 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.300197 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-config-data-custom\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.300931 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-scripts\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.303581 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.305237 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.313955 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klbkj\" (UniqueName: \"kubernetes.io/projected/4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e-kube-api-access-klbkj\") pod \"cinder-api-0\" (UID: \"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e\") " pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.333428 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 11:47:38 crc kubenswrapper[4849]: I1209 11:47:38.580840 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8acf922a-cec2-429a-b26a-66de573fa0f6" path="/var/lib/kubelet/pods/8acf922a-cec2-429a-b26a-66de573fa0f6/volumes" Dec 09 11:47:39 crc kubenswrapper[4849]: I1209 11:47:39.000403 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 11:47:39 crc kubenswrapper[4849]: I1209 11:47:39.962187 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e","Type":"ContainerStarted","Data":"a6762e9b288fdfee81427fc114b8df1d8457f7078e2a860afa61958ca96303dc"} Dec 09 11:47:39 crc kubenswrapper[4849]: I1209 11:47:39.962584 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e","Type":"ContainerStarted","Data":"d111ee3c71da817221071501e50ee8c832e594c487e9708141630b2be1f6f923"} Dec 09 11:47:40 crc kubenswrapper[4849]: I1209 11:47:40.975189 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e","Type":"ContainerStarted","Data":"23b0a4d2e5fccf6acd1820840fe9de5428a2fc91a5bbf9f5d8c8e200bd838eb3"} Dec 09 11:47:40 crc kubenswrapper[4849]: I1209 11:47:40.975731 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 09 11:47:40 crc kubenswrapper[4849]: I1209 11:47:40.999061 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.999035352 podStartE2EDuration="3.999035352s" podCreationTimestamp="2025-12-09 11:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:40.996943879 +0000 UTC m=+1243.536828225" watchObservedRunningTime="2025-12-09 11:47:40.999035352 +0000 UTC m=+1243.538919678" Dec 09 11:47:42 crc kubenswrapper[4849]: I1209 11:47:42.939005 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.013178 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wjf4f"] Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.013473 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" podUID="e42c2434-f83d-4531-9f19-073674ff63dd" containerName="dnsmasq-dns" containerID="cri-o://8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a" gracePeriod=10 Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.101059 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.217350 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.241793 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" podUID="e42c2434-f83d-4531-9f19-073674ff63dd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.673326 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.744683 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-config\") pod \"e42c2434-f83d-4531-9f19-073674ff63dd\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.744757 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-dns-svc\") pod \"e42c2434-f83d-4531-9f19-073674ff63dd\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.744788 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2pjt\" (UniqueName: \"kubernetes.io/projected/e42c2434-f83d-4531-9f19-073674ff63dd-kube-api-access-w2pjt\") pod \"e42c2434-f83d-4531-9f19-073674ff63dd\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.744938 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-nb\") pod \"e42c2434-f83d-4531-9f19-073674ff63dd\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.745115 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-sb\") pod \"e42c2434-f83d-4531-9f19-073674ff63dd\" (UID: \"e42c2434-f83d-4531-9f19-073674ff63dd\") " Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.775770 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42c2434-f83d-4531-9f19-073674ff63dd-kube-api-access-w2pjt" (OuterVolumeSpecName: "kube-api-access-w2pjt") pod "e42c2434-f83d-4531-9f19-073674ff63dd" (UID: "e42c2434-f83d-4531-9f19-073674ff63dd"). InnerVolumeSpecName "kube-api-access-w2pjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.778365 4849 generic.go:334] "Generic (PLEG): container finished" podID="e42c2434-f83d-4531-9f19-073674ff63dd" containerID="8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a" exitCode=0 Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.778671 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerName="cinder-scheduler" containerID="cri-o://34fc4578940807cbb39415fdd6e679b60e9fcd6e7802674b291a1a2a9a511536" gracePeriod=30 Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.779008 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.782210 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" event={"ID":"e42c2434-f83d-4531-9f19-073674ff63dd","Type":"ContainerDied","Data":"8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a"} Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.785971 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wjf4f" event={"ID":"e42c2434-f83d-4531-9f19-073674ff63dd","Type":"ContainerDied","Data":"bb93c2ff5a6e0a9a6c731fe9c8cd0673707c69571d6b9b5146b72626375c5db7"} Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.788281 4849 scope.go:117] "RemoveContainer" containerID="8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.782963 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerName="probe" containerID="cri-o://f59d615456551050541bd4ef0df0f079b19906960161c9dd4417eb68e6b51574" gracePeriod=30 Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.836439 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-config" (OuterVolumeSpecName: "config") pod "e42c2434-f83d-4531-9f19-073674ff63dd" (UID: "e42c2434-f83d-4531-9f19-073674ff63dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.845976 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e42c2434-f83d-4531-9f19-073674ff63dd" (UID: "e42c2434-f83d-4531-9f19-073674ff63dd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.847804 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.848027 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.848099 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2pjt\" (UniqueName: \"kubernetes.io/projected/e42c2434-f83d-4531-9f19-073674ff63dd-kube-api-access-w2pjt\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.849786 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e42c2434-f83d-4531-9f19-073674ff63dd" (UID: "e42c2434-f83d-4531-9f19-073674ff63dd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.860497 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e42c2434-f83d-4531-9f19-073674ff63dd" (UID: "e42c2434-f83d-4531-9f19-073674ff63dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.878789 4849 scope.go:117] "RemoveContainer" containerID="657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.915934 4849 scope.go:117] "RemoveContainer" containerID="8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a" Dec 09 11:47:43 crc kubenswrapper[4849]: E1209 11:47:43.917987 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a\": container with ID starting with 8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a not found: ID does not exist" containerID="8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.918049 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a"} err="failed to get container status \"8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a\": rpc error: code = NotFound desc = could not find container \"8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a\": container with ID starting with 8047e1839aca05bd7942932df7303f95e51b37365cc63cd33340942c924a207a not found: ID does not exist" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.918086 4849 scope.go:117] "RemoveContainer" containerID="657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7" Dec 09 11:47:43 crc kubenswrapper[4849]: E1209 11:47:43.918997 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7\": container with ID starting with 657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7 not found: ID does not exist" containerID="657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.919031 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7"} err="failed to get container status \"657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7\": rpc error: code = NotFound desc = could not find container \"657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7\": container with ID starting with 657c304153c50fb2de36f24c8ca2f9984221c4ac30b663e230b25d9408d958d7 not found: ID does not exist" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.949983 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:43 crc kubenswrapper[4849]: I1209 11:47:43.950023 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/e42c2434-f83d-4531-9f19-073674ff63dd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.157244 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wjf4f"] Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.168550 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wjf4f"] Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.348979 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.465155 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-config\") pod \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.465607 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-ovndb-tls-certs\") pod \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.465758 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-httpd-config\") pod \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.465930 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-combined-ca-bundle\") pod \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.466051 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnq4j\" (UniqueName: \"kubernetes.io/projected/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-kube-api-access-qnq4j\") pod \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\" (UID: \"bfa8b5b6-c9f2-40c6-8e55-b465168d380a\") " Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.474645 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bfa8b5b6-c9f2-40c6-8e55-b465168d380a" (UID: "bfa8b5b6-c9f2-40c6-8e55-b465168d380a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.474804 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-kube-api-access-qnq4j" (OuterVolumeSpecName: "kube-api-access-qnq4j") pod "bfa8b5b6-c9f2-40c6-8e55-b465168d380a" (UID: "bfa8b5b6-c9f2-40c6-8e55-b465168d380a"). InnerVolumeSpecName "kube-api-access-qnq4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.521905 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-config" (OuterVolumeSpecName: "config") pod "bfa8b5b6-c9f2-40c6-8e55-b465168d380a" (UID: "bfa8b5b6-c9f2-40c6-8e55-b465168d380a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.552638 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfa8b5b6-c9f2-40c6-8e55-b465168d380a" (UID: "bfa8b5b6-c9f2-40c6-8e55-b465168d380a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.560355 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42c2434-f83d-4531-9f19-073674ff63dd" path="/var/lib/kubelet/pods/e42c2434-f83d-4531-9f19-073674ff63dd/volumes" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.570897 4849 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.572089 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.572105 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnq4j\" (UniqueName: \"kubernetes.io/projected/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-kube-api-access-qnq4j\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.572116 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.604803 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bfa8b5b6-c9f2-40c6-8e55-b465168d380a" (UID: "bfa8b5b6-c9f2-40c6-8e55-b465168d380a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.673931 4849 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8b5b6-c9f2-40c6-8e55-b465168d380a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.788368 4849 generic.go:334] "Generic (PLEG): container finished" podID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerID="f59d615456551050541bd4ef0df0f079b19906960161c9dd4417eb68e6b51574" exitCode=0 Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.788802 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04cc0062-bd46-48a2-b761-f0c8e377cace","Type":"ContainerDied","Data":"f59d615456551050541bd4ef0df0f079b19906960161c9dd4417eb68e6b51574"} Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.790964 4849 generic.go:334] "Generic (PLEG): container finished" podID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerID="7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2" exitCode=0 Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.791002 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbfd748b8-p8g49" event={"ID":"bfa8b5b6-c9f2-40c6-8e55-b465168d380a","Type":"ContainerDied","Data":"7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2"} Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.791041 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbfd748b8-p8g49" event={"ID":"bfa8b5b6-c9f2-40c6-8e55-b465168d380a","Type":"ContainerDied","Data":"d07cbfd9e858b1a55bd73cc285c84bbf4256f7d40706d975ac544c5f8736b69a"} Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.791054 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dbfd748b8-p8g49" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.791062 4849 scope.go:117] "RemoveContainer" containerID="5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.818265 4849 scope.go:117] "RemoveContainer" containerID="7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.820572 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dbfd748b8-p8g49"] Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.832594 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dbfd748b8-p8g49"] Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.844323 4849 scope.go:117] "RemoveContainer" containerID="5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31" Dec 09 11:47:44 crc kubenswrapper[4849]: E1209 11:47:44.845276 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31\": container with ID starting with 5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31 not found: ID does not exist" containerID="5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.845315 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31"} err="failed to get container status \"5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31\": rpc error: code = NotFound desc = could not find container \"5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31\": container with ID starting with 5d7bf4cd0eaf9819a10fd0db73202046ef04dd4669d2f49149617c7af456ce31 not found: ID does not exist" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.845348 4849 scope.go:117] "RemoveContainer" containerID="7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2" Dec 09 11:47:44 crc kubenswrapper[4849]: E1209 11:47:44.845610 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2\": container with ID starting with 7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2 not found: ID does not exist" containerID="7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2" Dec 09 11:47:44 crc kubenswrapper[4849]: I1209 11:47:44.845664 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2"} err="failed to get container status \"7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2\": rpc error: code = NotFound desc = could not find container \"7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2\": container with ID starting with 7d40e523916818e539db57bb074c11b3f102c0113b4871044d570fe9ee49d5e2 not found: ID does not exist" Dec 09 11:47:45 crc kubenswrapper[4849]: I1209 11:47:45.743359 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-76d4cfc555-fqqzj" Dec 09 11:47:46 crc kubenswrapper[4849]: I1209 11:47:46.548839 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" path="/var/lib/kubelet/pods/bfa8b5b6-c9f2-40c6-8e55-b465168d380a/volumes" Dec 09 11:47:47 crc kubenswrapper[4849]: I1209 11:47:47.827673 4849 generic.go:334] "Generic (PLEG): container finished" podID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerID="34fc4578940807cbb39415fdd6e679b60e9fcd6e7802674b291a1a2a9a511536" exitCode=0 Dec 09 11:47:47 crc kubenswrapper[4849]: I1209 11:47:47.827748 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04cc0062-bd46-48a2-b761-f0c8e377cace","Type":"ContainerDied","Data":"34fc4578940807cbb39415fdd6e679b60e9fcd6e7802674b291a1a2a9a511536"} Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.096047 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.142995 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data\") pod \"04cc0062-bd46-48a2-b761-f0c8e377cace\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.143099 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc0062-bd46-48a2-b761-f0c8e377cace-etc-machine-id\") pod \"04cc0062-bd46-48a2-b761-f0c8e377cace\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.143140 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-combined-ca-bundle\") pod \"04cc0062-bd46-48a2-b761-f0c8e377cace\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.143187 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fk2w\" (UniqueName: \"kubernetes.io/projected/04cc0062-bd46-48a2-b761-f0c8e377cace-kube-api-access-8fk2w\") pod \"04cc0062-bd46-48a2-b761-f0c8e377cace\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.143240 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data-custom\") pod \"04cc0062-bd46-48a2-b761-f0c8e377cace\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.143283 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04cc0062-bd46-48a2-b761-f0c8e377cace-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "04cc0062-bd46-48a2-b761-f0c8e377cace" (UID: "04cc0062-bd46-48a2-b761-f0c8e377cace"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.143293 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-scripts\") pod \"04cc0062-bd46-48a2-b761-f0c8e377cace\" (UID: \"04cc0062-bd46-48a2-b761-f0c8e377cace\") " Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.143801 4849 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc0062-bd46-48a2-b761-f0c8e377cace-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.151693 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04cc0062-bd46-48a2-b761-f0c8e377cace" (UID: "04cc0062-bd46-48a2-b761-f0c8e377cace"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.175200 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-scripts" (OuterVolumeSpecName: "scripts") pod "04cc0062-bd46-48a2-b761-f0c8e377cace" (UID: "04cc0062-bd46-48a2-b761-f0c8e377cace"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.184185 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04cc0062-bd46-48a2-b761-f0c8e377cace-kube-api-access-8fk2w" (OuterVolumeSpecName: "kube-api-access-8fk2w") pod "04cc0062-bd46-48a2-b761-f0c8e377cace" (UID: "04cc0062-bd46-48a2-b761-f0c8e377cace"). InnerVolumeSpecName "kube-api-access-8fk2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.237598 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04cc0062-bd46-48a2-b761-f0c8e377cace" (UID: "04cc0062-bd46-48a2-b761-f0c8e377cace"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.246567 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.246612 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fk2w\" (UniqueName: \"kubernetes.io/projected/04cc0062-bd46-48a2-b761-f0c8e377cace-kube-api-access-8fk2w\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.246625 4849 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.246634 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.284742 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data" (OuterVolumeSpecName: "config-data") pod "04cc0062-bd46-48a2-b761-f0c8e377cace" (UID: "04cc0062-bd46-48a2-b761-f0c8e377cace"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.348150 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc0062-bd46-48a2-b761-f0c8e377cace-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.838071 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04cc0062-bd46-48a2-b761-f0c8e377cace","Type":"ContainerDied","Data":"61c392c0c02eaee55e8829b4c2ffb52af368ecda7597e1d05c6036b2c08419e5"} Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.838136 4849 scope.go:117] "RemoveContainer" containerID="f59d615456551050541bd4ef0df0f079b19906960161c9dd4417eb68e6b51574" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.838280 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.865881 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.870938 4849 scope.go:117] "RemoveContainer" containerID="34fc4578940807cbb39415fdd6e679b60e9fcd6e7802674b291a1a2a9a511536" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.877834 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.894473 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 11:47:48 crc kubenswrapper[4849]: E1209 11:47:48.894943 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerName="cinder-scheduler" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.894972 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerName="cinder-scheduler" Dec 09 11:47:48 crc kubenswrapper[4849]: E1209 11:47:48.894998 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42c2434-f83d-4531-9f19-073674ff63dd" containerName="init" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895009 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42c2434-f83d-4531-9f19-073674ff63dd" containerName="init" Dec 09 11:47:48 crc kubenswrapper[4849]: E1209 11:47:48.895033 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerName="neutron-httpd" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895042 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerName="neutron-httpd" Dec 09 11:47:48 crc kubenswrapper[4849]: E1209 11:47:48.895055 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42c2434-f83d-4531-9f19-073674ff63dd" containerName="dnsmasq-dns" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895063 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42c2434-f83d-4531-9f19-073674ff63dd" containerName="dnsmasq-dns" Dec 09 11:47:48 crc kubenswrapper[4849]: E1209 11:47:48.895083 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerName="probe" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895092 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerName="probe" Dec 09 11:47:48 crc kubenswrapper[4849]: E1209 11:47:48.895106 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerName="neutron-api" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895115 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerName="neutron-api" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895325 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerName="neutron-api" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895351 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42c2434-f83d-4531-9f19-073674ff63dd" containerName="dnsmasq-dns" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895366 4849 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bfa8b5b6-c9f2-40c6-8e55-b465168d380a" containerName="neutron-httpd" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895380 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerName="cinder-scheduler" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.895390 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cc0062-bd46-48a2-b761-f0c8e377cace" containerName="probe" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.896566 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.902033 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.943143 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.965523 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.965585 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdfj\" (UniqueName: \"kubernetes.io/projected/292bc586-9fad-4698-b31f-e65e317ef940-kube-api-access-jmdfj\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.965636 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-config-data\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.965679 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.965703 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-scripts\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.965762 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/292bc586-9fad-4698-b31f-e65e317ef940-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.972298 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.973757 4849 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/openstackclient" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.977652 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-75vhv" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.977851 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.978354 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 09 11:47:48 crc kubenswrapper[4849]: I1209 11:47:48.983958 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.066963 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067008 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmdfj\" (UniqueName: \"kubernetes.io/projected/292bc586-9fad-4698-b31f-e65e317ef940-kube-api-access-jmdfj\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067047 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-config-data\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067068 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39476746-c540-4e2e-b31c-de35ea3d9ec1-openstack-config-secret\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067083 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39476746-c540-4e2e-b31c-de35ea3d9ec1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067128 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8jd8\" (UniqueName: \"kubernetes.io/projected/39476746-c540-4e2e-b31c-de35ea3d9ec1-kube-api-access-d8jd8\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067148 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067168 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-scripts\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067215 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39476746-c540-4e2e-b31c-de35ea3d9ec1-openstack-config\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067234 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/292bc586-9fad-4698-b31f-e65e317ef940-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.067321 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/292bc586-9fad-4698-b31f-e65e317ef940-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.076104 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.076385 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-config-data\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.076810 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.077085 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/292bc586-9fad-4698-b31f-e65e317ef940-scripts\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.085449 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmdfj\" (UniqueName: \"kubernetes.io/projected/292bc586-9fad-4698-b31f-e65e317ef940-kube-api-access-jmdfj\") pod \"cinder-scheduler-0\" (UID: \"292bc586-9fad-4698-b31f-e65e317ef940\") " pod="openstack/cinder-scheduler-0" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.168296 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39476746-c540-4e2e-b31c-de35ea3d9ec1-openstack-config-secret\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient" Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.168340 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39476746-c540-4e2e-b31c-de35ea3d9ec1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient"
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.168369 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8jd8\" (UniqueName: \"kubernetes.io/projected/39476746-c540-4e2e-b31c-de35ea3d9ec1-kube-api-access-d8jd8\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient"
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.168445 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39476746-c540-4e2e-b31c-de35ea3d9ec1-openstack-config\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient"
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.169354 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39476746-c540-4e2e-b31c-de35ea3d9ec1-openstack-config\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient"
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.179060 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39476746-c540-4e2e-b31c-de35ea3d9ec1-openstack-config-secret\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient"
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.183087 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39476746-c540-4e2e-b31c-de35ea3d9ec1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient"
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.197081 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8jd8\" (UniqueName: \"kubernetes.io/projected/39476746-c540-4e2e-b31c-de35ea3d9ec1-kube-api-access-d8jd8\") pod \"openstackclient\" (UID: \"39476746-c540-4e2e-b31c-de35ea3d9ec1\") " pod="openstack/openstackclient"
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.225884 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.294209 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 09 11:47:49 crc kubenswrapper[4849]: I1209 11:47:49.858258 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 09 11:47:49 crc kubenswrapper[4849]: W1209 11:47:49.867648 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod292bc586_9fad_4698_b31f_e65e317ef940.slice/crio-b48370c011565b9a9beefbb8a84c06d96fd865425c4952e9fdf1da4c861f8a58 WatchSource:0}: Error finding container b48370c011565b9a9beefbb8a84c06d96fd865425c4952e9fdf1da4c861f8a58: Status 404 returned error can't find the container with id b48370c011565b9a9beefbb8a84c06d96fd865425c4952e9fdf1da4c861f8a58
Dec 09 11:47:50 crc kubenswrapper[4849]: I1209 11:47:50.005698 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 09 11:47:50 crc kubenswrapper[4849]: W1209 11:47:50.017224 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39476746_c540_4e2e_b31c_de35ea3d9ec1.slice/crio-f203909f280407db265a1cc652467f7e741f3bb82802877a5b549fd0a023ec3c WatchSource:0}: Error finding container f203909f280407db265a1cc652467f7e741f3bb82802877a5b549fd0a023ec3c: Status 404 returned error can't find the container with id f203909f280407db265a1cc652467f7e741f3bb82802877a5b549fd0a023ec3c
Dec 09 11:47:50 crc kubenswrapper[4849]: I1209 11:47:50.549706 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04cc0062-bd46-48a2-b761-f0c8e377cace" path="/var/lib/kubelet/pods/04cc0062-bd46-48a2-b761-f0c8e377cace/volumes"
Dec 09 11:47:50 crc kubenswrapper[4849]: I1209 11:47:50.873120 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"39476746-c540-4e2e-b31c-de35ea3d9ec1","Type":"ContainerStarted","Data":"f203909f280407db265a1cc652467f7e741f3bb82802877a5b549fd0a023ec3c"}
Dec 09 11:47:50 crc kubenswrapper[4849]: I1209 11:47:50.876125 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"292bc586-9fad-4698-b31f-e65e317ef940","Type":"ContainerStarted","Data":"7715ae1d50bcc5293736635443801ff0f051681f461f4aab32e957edab51df32"}
Dec 09 11:47:50 crc kubenswrapper[4849]: I1209 11:47:50.876166 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"292bc586-9fad-4698-b31f-e65e317ef940","Type":"ContainerStarted","Data":"b48370c011565b9a9beefbb8a84c06d96fd865425c4952e9fdf1da4c861f8a58"}
Dec 09 11:47:51 crc kubenswrapper[4849]: I1209 11:47:51.135597 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 11:47:51 crc kubenswrapper[4849]: I1209 11:47:51.135667 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 11:47:51 crc kubenswrapper[4849]: I1209 11:47:51.452952 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 09 11:47:52 crc kubenswrapper[4849]: I1209 11:47:52.062012 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"292bc586-9fad-4698-b31f-e65e317ef940","Type":"ContainerStarted","Data":"8f3067c8401b804e77f5e950275f9cbfc6a86961f2f50f14e02bcbd325a36c23"}
Dec 09 11:47:52 crc kubenswrapper[4849]: I1209 11:47:52.184836 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.184813944 podStartE2EDuration="4.184813944s" podCreationTimestamp="2025-12-09 11:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:47:52.167527623 +0000 UTC m=+1254.707411959" watchObservedRunningTime="2025-12-09 11:47:52.184813944 +0000 UTC m=+1254.724698260"
Dec 09 11:47:54 crc kubenswrapper[4849]: I1209 11:47:54.228466 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 09 11:47:58 crc kubenswrapper[4849]: I1209 11:47:58.122109 4849 generic.go:334] "Generic (PLEG): container finished" podID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerID="68e09ecef5bbf909e9b8e614e67ff59b33bef9fb24004bca2ace8a5917ad795d" exitCode=137
Dec 09 11:47:58 crc kubenswrapper[4849]: I1209 11:47:58.122196 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerDied","Data":"68e09ecef5bbf909e9b8e614e67ff59b33bef9fb24004bca2ace8a5917ad795d"}
Dec 09 11:47:59 crc kubenswrapper[4849]: I1209 11:47:59.587975 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 09 11:48:01 crc kubenswrapper[4849]: I1209 11:48:01.676144 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.134:3000/\": dial tcp 10.217.0.134:3000: connect: connection refused"
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.619669 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.762828 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-scripts\") pod \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") "
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.762900 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-config-data\") pod \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") "
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.762938 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-sg-core-conf-yaml\") pod \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") "
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.763000 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-run-httpd\") pod \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") "
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.763025 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-log-httpd\") pod \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") "
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.763080 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-combined-ca-bundle\") pod \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") "
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.763107 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hx2l\" (UniqueName: \"kubernetes.io/projected/75d0bfb8-146b-4d21-8e81-f2cef3d99489-kube-api-access-8hx2l\") pod \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\" (UID: \"75d0bfb8-146b-4d21-8e81-f2cef3d99489\") "
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.763780 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "75d0bfb8-146b-4d21-8e81-f2cef3d99489" (UID: "75d0bfb8-146b-4d21-8e81-f2cef3d99489"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.764239 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75d0bfb8-146b-4d21-8e81-f2cef3d99489" (UID: "75d0bfb8-146b-4d21-8e81-f2cef3d99489"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.778102 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d0bfb8-146b-4d21-8e81-f2cef3d99489-kube-api-access-8hx2l" (OuterVolumeSpecName: "kube-api-access-8hx2l") pod "75d0bfb8-146b-4d21-8e81-f2cef3d99489" (UID: "75d0bfb8-146b-4d21-8e81-f2cef3d99489"). InnerVolumeSpecName "kube-api-access-8hx2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.778665 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-scripts" (OuterVolumeSpecName: "scripts") pod "75d0bfb8-146b-4d21-8e81-f2cef3d99489" (UID: "75d0bfb8-146b-4d21-8e81-f2cef3d99489"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.824645 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75d0bfb8-146b-4d21-8e81-f2cef3d99489" (UID: "75d0bfb8-146b-4d21-8e81-f2cef3d99489"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.868897 4849 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.868940 4849 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.868952 4849 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d0bfb8-146b-4d21-8e81-f2cef3d99489-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.868963 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hx2l\" (UniqueName: \"kubernetes.io/projected/75d0bfb8-146b-4d21-8e81-f2cef3d99489-kube-api-access-8hx2l\") on node \"crc\" DevicePath \"\""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.868974 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.891606 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75d0bfb8-146b-4d21-8e81-f2cef3d99489" (UID: "75d0bfb8-146b-4d21-8e81-f2cef3d99489"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.900839 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-config-data" (OuterVolumeSpecName: "config-data") pod "75d0bfb8-146b-4d21-8e81-f2cef3d99489" (UID: "75d0bfb8-146b-4d21-8e81-f2cef3d99489"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.970703 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 11:48:04 crc kubenswrapper[4849]: I1209 11:48:04.970916 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d0bfb8-146b-4d21-8e81-f2cef3d99489-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.232347 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"39476746-c540-4e2e-b31c-de35ea3d9ec1","Type":"ContainerStarted","Data":"f5266bc2d300f776dc342cb16393a476e8c88dcb250f8ca87b6445e26c0f9b12"}
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.235222 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75d0bfb8-146b-4d21-8e81-f2cef3d99489","Type":"ContainerDied","Data":"2346cc5b0e7f52a8a445bfef6562b88ce3c34649f23e418ac9450a56fe0aa729"}
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.235452 4849 scope.go:117] "RemoveContainer" containerID="68e09ecef5bbf909e9b8e614e67ff59b33bef9fb24004bca2ace8a5917ad795d"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.235627 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.255728 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.104526944 podStartE2EDuration="17.25570175s" podCreationTimestamp="2025-12-09 11:47:48 +0000 UTC" firstStartedPulling="2025-12-09 11:47:50.019063065 +0000 UTC m=+1252.558947381" lastFinishedPulling="2025-12-09 11:48:04.170237871 +0000 UTC m=+1266.710122187" observedRunningTime="2025-12-09 11:48:05.249795363 +0000 UTC m=+1267.789679689" watchObservedRunningTime="2025-12-09 11:48:05.25570175 +0000 UTC m=+1267.795586066"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.285832 4849 scope.go:117] "RemoveContainer" containerID="c8d08fbfc9bdd67b0ee3fc32004f074de1c0f231de61755e58dbe18cbfd14b26"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.298116 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.313326 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.321159 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 09 11:48:05 crc kubenswrapper[4849]: E1209 11:48:05.321534 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="sg-core"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.321548 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="sg-core"
Dec 09 11:48:05 crc kubenswrapper[4849]: E1209 11:48:05.321565 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="ceilometer-central-agent"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.321572 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="ceilometer-central-agent"
Dec 09 11:48:05 crc kubenswrapper[4849]: E1209 11:48:05.321599 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="proxy-httpd"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.321604 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="proxy-httpd"
Dec 09 11:48:05 crc kubenswrapper[4849]: E1209 11:48:05.321617 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="ceilometer-notification-agent"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.321625 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="ceilometer-notification-agent"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.321788 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="ceilometer-notification-agent"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.321804 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="ceilometer-central-agent"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.321811 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="proxy-httpd"
Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.321821 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" containerName="sg-core"
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.329818 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.330029 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.337919 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.346515 4849 scope.go:117] "RemoveContainer" containerID="fe801c71078b298ecdbcd0b422e346c62d4a765cf23f377f60d3dfccbfa16066" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.384802 4849 scope.go:117] "RemoveContainer" containerID="d737fa8f20ae3532085d512f58419e3598cd420f88a3ababdce2de078f5c00fa" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.480229 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.480301 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-log-httpd\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.480329 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-run-httpd\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.480393 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.480427 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vdpd\" (UniqueName: \"kubernetes.io/projected/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-kube-api-access-6vdpd\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.480452 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-config-data\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.480494 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-scripts\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 
11:48:05.581690 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-run-httpd\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.582069 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.582165 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vdpd\" (UniqueName: \"kubernetes.io/projected/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-kube-api-access-6vdpd\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.582258 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-config-data\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.582378 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-scripts\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.582855 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.583017 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-log-httpd\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.584514 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-run-httpd\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.586072 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-log-httpd\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.590467 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-config-data\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.592188 4849 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.595355 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-scripts\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.599918 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.618346 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vdpd\" (UniqueName: \"kubernetes.io/projected/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-kube-api-access-6vdpd\") pod \"ceilometer-0\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " pod="openstack/ceilometer-0" Dec 09 11:48:05 crc kubenswrapper[4849]: I1209 11:48:05.650168 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:48:06 crc kubenswrapper[4849]: I1209 11:48:06.193267 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:48:06 crc kubenswrapper[4849]: I1209 11:48:06.243734 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerStarted","Data":"3bc7ff0026ea94d171e101f02a8e4372447240d1db9c273477312d3926c99a67"} Dec 09 11:48:06 crc kubenswrapper[4849]: I1209 11:48:06.549817 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d0bfb8-146b-4d21-8e81-f2cef3d99489" path="/var/lib/kubelet/pods/75d0bfb8-146b-4d21-8e81-f2cef3d99489/volumes" Dec 09 11:48:07 crc kubenswrapper[4849]: I1209 11:48:07.293745 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerStarted","Data":"9679952b5fc6c2ec3a4ed1ff2d932efcc0075c44e81c3b5993ba0b9c06c7460b"} Dec 09 11:48:08 crc kubenswrapper[4849]: I1209 11:48:08.016980 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:48:09 crc kubenswrapper[4849]: I1209 11:48:09.312589 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerStarted","Data":"3a2caebb8bc9366f18135518bc0a247f015ccaa0d6578aac2ccb50da913ed52b"} Dec 09 11:48:10 crc kubenswrapper[4849]: I1209 11:48:10.323017 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerStarted","Data":"9fd1977b88fb17ff4953c855aef5c1ce16ff8970575e546c7f37d26d8fd3fa18"} Dec 09 11:48:11 crc kubenswrapper[4849]: I1209 11:48:11.334719 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerStarted","Data":"ccfba7a1aca585ebde17a94b5316d040cbfaf87c9ee2b2201bca575fd55ce801"} Dec 09 11:48:11 crc kubenswrapper[4849]: 
Dec 09 11:48:11 crc kubenswrapper[4849]: I1209 11:48:11.336204 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 09 11:48:11 crc kubenswrapper[4849]: I1209 11:48:11.335565 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="proxy-httpd" containerID="cri-o://ccfba7a1aca585ebde17a94b5316d040cbfaf87c9ee2b2201bca575fd55ce801" gracePeriod=30
Dec 09 11:48:11 crc kubenswrapper[4849]: I1209 11:48:11.334972 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="ceilometer-central-agent" containerID="cri-o://9679952b5fc6c2ec3a4ed1ff2d932efcc0075c44e81c3b5993ba0b9c06c7460b" gracePeriod=30
Dec 09 11:48:11 crc kubenswrapper[4849]: I1209 11:48:11.335597 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="ceilometer-notification-agent" containerID="cri-o://3a2caebb8bc9366f18135518bc0a247f015ccaa0d6578aac2ccb50da913ed52b" gracePeriod=30
Dec 09 11:48:11 crc kubenswrapper[4849]: I1209 11:48:11.335582 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="sg-core" containerID="cri-o://9fd1977b88fb17ff4953c855aef5c1ce16ff8970575e546c7f37d26d8fd3fa18" gracePeriod=30
Dec 09 11:48:11 crc kubenswrapper[4849]: I1209 11:48:11.375899 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.547428246 podStartE2EDuration="6.375876325s" podCreationTimestamp="2025-12-09 11:48:05 +0000 UTC" firstStartedPulling="2025-12-09 11:48:06.177528585 +0000 UTC m=+1268.717412891" lastFinishedPulling="2025-12-09 11:48:11.005976654 +0000 UTC m=+1273.545860970" observedRunningTime="2025-12-09 11:48:11.366947402 +0000 UTC m=+1273.906831728" watchObservedRunningTime="2025-12-09 11:48:11.375876325 +0000 UTC m=+1273.915760651"
Dec 09 11:48:12 crc kubenswrapper[4849]: I1209 11:48:12.345621 4849 generic.go:334] "Generic (PLEG): container finished" podID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerID="9fd1977b88fb17ff4953c855aef5c1ce16ff8970575e546c7f37d26d8fd3fa18" exitCode=2
Dec 09 11:48:12 crc kubenswrapper[4849]: I1209 11:48:12.345660 4849 generic.go:334] "Generic (PLEG): container finished" podID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerID="3a2caebb8bc9366f18135518bc0a247f015ccaa0d6578aac2ccb50da913ed52b" exitCode=0
Dec 09 11:48:12 crc kubenswrapper[4849]: I1209 11:48:12.345671 4849 generic.go:334] "Generic (PLEG): container finished" podID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerID="9679952b5fc6c2ec3a4ed1ff2d932efcc0075c44e81c3b5993ba0b9c06c7460b" exitCode=0
Dec 09 11:48:12 crc kubenswrapper[4849]: I1209 11:48:12.345680 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerDied","Data":"9fd1977b88fb17ff4953c855aef5c1ce16ff8970575e546c7f37d26d8fd3fa18"}
Dec 09 11:48:12 crc kubenswrapper[4849]: I1209 11:48:12.345720 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerDied","Data":"3a2caebb8bc9366f18135518bc0a247f015ccaa0d6578aac2ccb50da913ed52b"}
Dec 09 11:48:12 crc kubenswrapper[4849]: I1209 11:48:12.345732 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerDied","Data":"9679952b5fc6c2ec3a4ed1ff2d932efcc0075c44e81c3b5993ba0b9c06c7460b"}
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.133225 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.133922 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.325679 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8gtvk"]
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.326794 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8gtvk"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.343312 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8gtvk"]
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.354709 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3150-account-create-update-zd6b9"]
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.356134 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3150-account-create-update-zd6b9"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.358752 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.371544 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3150-account-create-update-zd6b9"]
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.445577 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vf9pp"]
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.446736 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vf9pp"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.456818 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vf9pp"]
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.522381 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b7c503-fd52-4bc9-88f0-a37f6346916d-operator-scripts\") pod \"nova-api-3150-account-create-update-zd6b9\" (UID: \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\") " pod="openstack/nova-api-3150-account-create-update-zd6b9"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.522483 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e98d76-209b-48ef-bb19-c996e1fd5fbb-operator-scripts\") pod \"nova-api-db-create-8gtvk\" (UID: \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\") " pod="openstack/nova-api-db-create-8gtvk"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.522515 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl6zs\" (UniqueName: \"kubernetes.io/projected/f0b7c503-fd52-4bc9-88f0-a37f6346916d-kube-api-access-gl6zs\") pod \"nova-api-3150-account-create-update-zd6b9\" (UID: \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\") " pod="openstack/nova-api-3150-account-create-update-zd6b9"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.522558 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4fh7\" (UniqueName: \"kubernetes.io/projected/73e98d76-209b-48ef-bb19-c996e1fd5fbb-kube-api-access-m4fh7\") pod \"nova-api-db-create-8gtvk\" (UID: \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\") " pod="openstack/nova-api-db-create-8gtvk"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.522654 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddxg2\" (UniqueName: \"kubernetes.io/projected/6c7247c3-8d63-4753-969f-dcdb4eea86d1-kube-api-access-ddxg2\") pod \"nova-cell0-db-create-vf9pp\" (UID: \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\") " pod="openstack/nova-cell0-db-create-vf9pp"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.522699 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7247c3-8d63-4753-969f-dcdb4eea86d1-operator-scripts\") pod \"nova-cell0-db-create-vf9pp\" (UID: \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\") " pod="openstack/nova-cell0-db-create-vf9pp"
Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.545572 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-618a-account-create-update-j7swh"]
Need to start a new one" pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.549833 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.553599 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bkvhb"] Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.562117 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.577959 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bkvhb"] Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.627565 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddxg2\" (UniqueName: \"kubernetes.io/projected/6c7247c3-8d63-4753-969f-dcdb4eea86d1-kube-api-access-ddxg2\") pod \"nova-cell0-db-create-vf9pp\" (UID: \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\") " pod="openstack/nova-cell0-db-create-vf9pp" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.627952 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7247c3-8d63-4753-969f-dcdb4eea86d1-operator-scripts\") pod \"nova-cell0-db-create-vf9pp\" (UID: \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\") " pod="openstack/nova-cell0-db-create-vf9pp" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.628014 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b7c503-fd52-4bc9-88f0-a37f6346916d-operator-scripts\") pod \"nova-api-3150-account-create-update-zd6b9\" (UID: \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\") " pod="openstack/nova-api-3150-account-create-update-zd6b9" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.628065 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e98d76-209b-48ef-bb19-c996e1fd5fbb-operator-scripts\") pod \"nova-api-db-create-8gtvk\" (UID: \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\") " pod="openstack/nova-api-db-create-8gtvk" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.628094 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl6zs\" (UniqueName: \"kubernetes.io/projected/f0b7c503-fd52-4bc9-88f0-a37f6346916d-kube-api-access-gl6zs\") pod \"nova-api-3150-account-create-update-zd6b9\" (UID: \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\") " pod="openstack/nova-api-3150-account-create-update-zd6b9" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.628138 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4fh7\" (UniqueName: \"kubernetes.io/projected/73e98d76-209b-48ef-bb19-c996e1fd5fbb-kube-api-access-m4fh7\") pod \"nova-api-db-create-8gtvk\" (UID: \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\") " pod="openstack/nova-api-db-create-8gtvk" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.633473 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e98d76-209b-48ef-bb19-c996e1fd5fbb-operator-scripts\") pod \"nova-api-db-create-8gtvk\" (UID: \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\") " pod="openstack/nova-api-db-create-8gtvk" Dec 09 
11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.633500 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7247c3-8d63-4753-969f-dcdb4eea86d1-operator-scripts\") pod \"nova-cell0-db-create-vf9pp\" (UID: \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\") " pod="openstack/nova-cell0-db-create-vf9pp" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.634032 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b7c503-fd52-4bc9-88f0-a37f6346916d-operator-scripts\") pod \"nova-api-3150-account-create-update-zd6b9\" (UID: \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\") " pod="openstack/nova-api-3150-account-create-update-zd6b9" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.645625 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-618a-account-create-update-j7swh"] Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.654147 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddxg2\" (UniqueName: \"kubernetes.io/projected/6c7247c3-8d63-4753-969f-dcdb4eea86d1-kube-api-access-ddxg2\") pod \"nova-cell0-db-create-vf9pp\" (UID: \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\") " pod="openstack/nova-cell0-db-create-vf9pp" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.659073 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4fh7\" (UniqueName: \"kubernetes.io/projected/73e98d76-209b-48ef-bb19-c996e1fd5fbb-kube-api-access-m4fh7\") pod \"nova-api-db-create-8gtvk\" (UID: \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\") " pod="openstack/nova-api-db-create-8gtvk" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.681090 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl6zs\" (UniqueName: \"kubernetes.io/projected/f0b7c503-fd52-4bc9-88f0-a37f6346916d-kube-api-access-gl6zs\") pod \"nova-api-3150-account-create-update-zd6b9\" (UID: \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\") " pod="openstack/nova-api-3150-account-create-update-zd6b9" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.729379 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-operator-scripts\") pod \"nova-cell1-db-create-bkvhb\" (UID: \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\") " pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.729465 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cfdbceb-1c2b-482d-b331-06c839bee145-operator-scripts\") pod \"nova-cell0-618a-account-create-update-j7swh\" (UID: \"7cfdbceb-1c2b-482d-b331-06c839bee145\") " pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.729497 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmvxs\" (UniqueName: \"kubernetes.io/projected/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-kube-api-access-kmvxs\") pod \"nova-cell1-db-create-bkvhb\" (UID: \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\") " pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.729555 4849 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4dvh\" (UniqueName: \"kubernetes.io/projected/7cfdbceb-1c2b-482d-b331-06c839bee145-kube-api-access-g4dvh\") pod \"nova-cell0-618a-account-create-update-j7swh\" (UID: \"7cfdbceb-1c2b-482d-b331-06c839bee145\") " pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.743295 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5458-account-create-update-cn6mz"] Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.749394 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.760058 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.765462 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vf9pp" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.769949 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5458-account-create-update-cn6mz"] Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.831800 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-operator-scripts\") pod \"nova-cell1-db-create-bkvhb\" (UID: \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\") " pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.831868 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cfdbceb-1c2b-482d-b331-06c839bee145-operator-scripts\") pod \"nova-cell0-618a-account-create-update-j7swh\" (UID: \"7cfdbceb-1c2b-482d-b331-06c839bee145\") " pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.831898 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmvxs\" (UniqueName: \"kubernetes.io/projected/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-kube-api-access-kmvxs\") pod \"nova-cell1-db-create-bkvhb\" (UID: \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\") " pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.831969 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4dvh\" (UniqueName: \"kubernetes.io/projected/7cfdbceb-1c2b-482d-b331-06c839bee145-kube-api-access-g4dvh\") pod \"nova-cell0-618a-account-create-update-j7swh\" (UID: \"7cfdbceb-1c2b-482d-b331-06c839bee145\") " pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.833125 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-operator-scripts\") pod \"nova-cell1-db-create-bkvhb\" (UID: \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\") " pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.833817 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cfdbceb-1c2b-482d-b331-06c839bee145-operator-scripts\") pod 
\"nova-cell0-618a-account-create-update-j7swh\" (UID: \"7cfdbceb-1c2b-482d-b331-06c839bee145\") " pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.848822 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmvxs\" (UniqueName: \"kubernetes.io/projected/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-kube-api-access-kmvxs\") pod \"nova-cell1-db-create-bkvhb\" (UID: \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\") " pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.856107 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4dvh\" (UniqueName: \"kubernetes.io/projected/7cfdbceb-1c2b-482d-b331-06c839bee145-kube-api-access-g4dvh\") pod \"nova-cell0-618a-account-create-update-j7swh\" (UID: \"7cfdbceb-1c2b-482d-b331-06c839bee145\") " pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.863821 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.878717 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.933684 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f696cb20-6d6a-467c-9963-4ea7bd4bb894-operator-scripts\") pod \"nova-cell1-5458-account-create-update-cn6mz\" (UID: \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\") " pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.934113 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb7q2\" (UniqueName: \"kubernetes.io/projected/f696cb20-6d6a-467c-9963-4ea7bd4bb894-kube-api-access-xb7q2\") pod \"nova-cell1-5458-account-create-update-cn6mz\" (UID: \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\") " pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.949530 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8gtvk" Dec 09 11:48:21 crc kubenswrapper[4849]: I1209 11:48:21.973687 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3150-account-create-update-zd6b9" Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.037475 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb7q2\" (UniqueName: \"kubernetes.io/projected/f696cb20-6d6a-467c-9963-4ea7bd4bb894-kube-api-access-xb7q2\") pod \"nova-cell1-5458-account-create-update-cn6mz\" (UID: \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\") " pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.037569 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f696cb20-6d6a-467c-9963-4ea7bd4bb894-operator-scripts\") pod \"nova-cell1-5458-account-create-update-cn6mz\" (UID: \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\") " pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.039044 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f696cb20-6d6a-467c-9963-4ea7bd4bb894-operator-scripts\") pod \"nova-cell1-5458-account-create-update-cn6mz\" (UID: \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\") " pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.064103 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb7q2\" (UniqueName: \"kubernetes.io/projected/f696cb20-6d6a-467c-9963-4ea7bd4bb894-kube-api-access-xb7q2\") pod \"nova-cell1-5458-account-create-update-cn6mz\" (UID: \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\") " pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.088387 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.534566 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vf9pp"] Dec 09 11:48:22 crc kubenswrapper[4849]: W1209 11:48:22.536293 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c7247c3_8d63_4753_969f_dcdb4eea86d1.slice/crio-1e4e196f1e3bf5fe6b92c248a6d9864a8e6ac2eb1130679a7973fb2bc620bfd1 WatchSource:0}: Error finding container 1e4e196f1e3bf5fe6b92c248a6d9864a8e6ac2eb1130679a7973fb2bc620bfd1: Status 404 returned error can't find the container with id 1e4e196f1e3bf5fe6b92c248a6d9864a8e6ac2eb1130679a7973fb2bc620bfd1 Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.792249 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-618a-account-create-update-j7swh"] Dec 09 11:48:22 crc kubenswrapper[4849]: W1209 11:48:22.811044 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a588afb_8e8a_4b60_96f7_7d24d4b5a5fc.slice/crio-6e13b628e10c5a1d26b520d484f68f2e017bdc08cd3260c28a97f73a6a0e52b6 WatchSource:0}: Error finding container 6e13b628e10c5a1d26b520d484f68f2e017bdc08cd3260c28a97f73a6a0e52b6: Status 404 returned error can't find the container with id 6e13b628e10c5a1d26b520d484f68f2e017bdc08cd3260c28a97f73a6a0e52b6 Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.814757 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bkvhb"] Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.835015 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8gtvk"] Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.913680 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5458-account-create-update-cn6mz"] Dec 09 11:48:22 crc kubenswrapper[4849]: W1209 11:48:22.922374 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf696cb20_6d6a_467c_9963_4ea7bd4bb894.slice/crio-1793a62f36db79e286e843cf05da31a21a4c72a7415bab2e747abd190daf5f6f WatchSource:0}: Error finding container 1793a62f36db79e286e843cf05da31a21a4c72a7415bab2e747abd190daf5f6f: Status 404 returned error can't find the container with id 1793a62f36db79e286e843cf05da31a21a4c72a7415bab2e747abd190daf5f6f Dec 09 11:48:22 crc kubenswrapper[4849]: I1209 11:48:22.944520 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3150-account-create-update-zd6b9"] Dec 09 11:48:22 crc kubenswrapper[4849]: W1209 11:48:22.995580 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b7c503_fd52_4bc9_88f0_a37f6346916d.slice/crio-55dfef7eaeaa131d9933ae3a35aef0f5c5446a2fd9aabf2e88b0e60ed5f9478d WatchSource:0}: Error finding container 55dfef7eaeaa131d9933ae3a35aef0f5c5446a2fd9aabf2e88b0e60ed5f9478d: Status 404 returned error can't find the container with id 55dfef7eaeaa131d9933ae3a35aef0f5c5446a2fd9aabf2e88b0e60ed5f9478d Dec 09 11:48:23 crc kubenswrapper[4849]: I1209 11:48:23.466083 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3150-account-create-update-zd6b9" 
event={"ID":"f0b7c503-fd52-4bc9-88f0-a37f6346916d","Type":"ContainerStarted","Data":"55dfef7eaeaa131d9933ae3a35aef0f5c5446a2fd9aabf2e88b0e60ed5f9478d"} Dec 09 11:48:23 crc kubenswrapper[4849]: I1209 11:48:23.468726 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vf9pp" event={"ID":"6c7247c3-8d63-4753-969f-dcdb4eea86d1","Type":"ContainerStarted","Data":"0e6155a514ff9498bc70089ea56f264c6c2822bbb855cf7be274cc45e371b6a3"} Dec 09 11:48:23 crc kubenswrapper[4849]: I1209 11:48:23.468757 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vf9pp" event={"ID":"6c7247c3-8d63-4753-969f-dcdb4eea86d1","Type":"ContainerStarted","Data":"1e4e196f1e3bf5fe6b92c248a6d9864a8e6ac2eb1130679a7973fb2bc620bfd1"} Dec 09 11:48:23 crc kubenswrapper[4849]: I1209 11:48:23.481173 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5458-account-create-update-cn6mz" event={"ID":"f696cb20-6d6a-467c-9963-4ea7bd4bb894","Type":"ContainerStarted","Data":"1793a62f36db79e286e843cf05da31a21a4c72a7415bab2e747abd190daf5f6f"} Dec 09 11:48:23 crc kubenswrapper[4849]: I1209 11:48:23.483362 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8gtvk" event={"ID":"73e98d76-209b-48ef-bb19-c996e1fd5fbb","Type":"ContainerStarted","Data":"8e9cb1ca53792e061a66142b849b326d821bb5978dde3d3c9dcb6a9238a3b2b3"} Dec 09 11:48:23 crc kubenswrapper[4849]: I1209 11:48:23.489812 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bkvhb" event={"ID":"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc","Type":"ContainerStarted","Data":"6e13b628e10c5a1d26b520d484f68f2e017bdc08cd3260c28a97f73a6a0e52b6"} Dec 09 11:48:23 crc kubenswrapper[4849]: I1209 11:48:23.509857 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-618a-account-create-update-j7swh" event={"ID":"7cfdbceb-1c2b-482d-b331-06c839bee145","Type":"ContainerStarted","Data":"192aa2c48b7300d46520ca6c46bb2518c62b599d08498d19a01080bd287cdd3d"} Dec 09 11:48:23 crc kubenswrapper[4849]: I1209 11:48:23.512910 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-vf9pp" podStartSLOduration=2.512888437 podStartE2EDuration="2.512888437s" podCreationTimestamp="2025-12-09 11:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:48:23.50298022 +0000 UTC m=+1286.042864546" watchObservedRunningTime="2025-12-09 11:48:23.512888437 +0000 UTC m=+1286.052772753" Dec 09 11:48:23 crc kubenswrapper[4849]: I1209 11:48:23.552883 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-618a-account-create-update-j7swh" podStartSLOduration=2.552854924 podStartE2EDuration="2.552854924s" podCreationTimestamp="2025-12-09 11:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:48:23.544609629 +0000 UTC m=+1286.084493945" watchObservedRunningTime="2025-12-09 11:48:23.552854924 +0000 UTC m=+1286.092739250" Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.518172 4849 generic.go:334] "Generic (PLEG): container finished" podID="2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc" containerID="f072917b3f014061fe718fbe99e09e6813185b1c406a8b2f83eac9d4fa8dc52c" exitCode=0 Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.518234 4849 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bkvhb" event={"ID":"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc","Type":"ContainerDied","Data":"f072917b3f014061fe718fbe99e09e6813185b1c406a8b2f83eac9d4fa8dc52c"} Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.519606 4849 generic.go:334] "Generic (PLEG): container finished" podID="7cfdbceb-1c2b-482d-b331-06c839bee145" containerID="b9572446e566ec8d37d8f67b8456a5ddc62d4fb20e747ed110f4fa2c5f0705d3" exitCode=0 Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.519716 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-618a-account-create-update-j7swh" event={"ID":"7cfdbceb-1c2b-482d-b331-06c839bee145","Type":"ContainerDied","Data":"b9572446e566ec8d37d8f67b8456a5ddc62d4fb20e747ed110f4fa2c5f0705d3"} Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.521374 4849 generic.go:334] "Generic (PLEG): container finished" podID="f0b7c503-fd52-4bc9-88f0-a37f6346916d" containerID="c0a0105bbc82ccab06d746d48a023c556ea2cf970f5890a44f0063a6bbd35976" exitCode=0 Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.521620 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3150-account-create-update-zd6b9" event={"ID":"f0b7c503-fd52-4bc9-88f0-a37f6346916d","Type":"ContainerDied","Data":"c0a0105bbc82ccab06d746d48a023c556ea2cf970f5890a44f0063a6bbd35976"} Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.522951 4849 generic.go:334] "Generic (PLEG): container finished" podID="6c7247c3-8d63-4753-969f-dcdb4eea86d1" containerID="0e6155a514ff9498bc70089ea56f264c6c2822bbb855cf7be274cc45e371b6a3" exitCode=0 Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.523007 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vf9pp" event={"ID":"6c7247c3-8d63-4753-969f-dcdb4eea86d1","Type":"ContainerDied","Data":"0e6155a514ff9498bc70089ea56f264c6c2822bbb855cf7be274cc45e371b6a3"} Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.524174 4849 generic.go:334] "Generic (PLEG): container finished" podID="f696cb20-6d6a-467c-9963-4ea7bd4bb894" containerID="d582ebb6e31921bcf131f4b05a33be85e870df2f817384f1420d0f473af54344" exitCode=0 Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.524256 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5458-account-create-update-cn6mz" event={"ID":"f696cb20-6d6a-467c-9963-4ea7bd4bb894","Type":"ContainerDied","Data":"d582ebb6e31921bcf131f4b05a33be85e870df2f817384f1420d0f473af54344"} Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.525889 4849 generic.go:334] "Generic (PLEG): container finished" podID="73e98d76-209b-48ef-bb19-c996e1fd5fbb" containerID="7d34ef63bef99a54b16c4c8d3760fc06a08a6ee985371be78f3993b905e05fd3" exitCode=0 Dec 09 11:48:24 crc kubenswrapper[4849]: I1209 11:48:24.525925 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8gtvk" event={"ID":"73e98d76-209b-48ef-bb19-c996e1fd5fbb","Type":"ContainerDied","Data":"7d34ef63bef99a54b16c4c8d3760fc06a08a6ee985371be78f3993b905e05fd3"} Dec 09 11:48:25 crc kubenswrapper[4849]: I1209 11:48:25.994004 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3150-account-create-update-zd6b9" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.134279 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl6zs\" (UniqueName: \"kubernetes.io/projected/f0b7c503-fd52-4bc9-88f0-a37f6346916d-kube-api-access-gl6zs\") pod \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\" (UID: \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.134503 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b7c503-fd52-4bc9-88f0-a37f6346916d-operator-scripts\") pod \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\" (UID: \"f0b7c503-fd52-4bc9-88f0-a37f6346916d\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.135100 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b7c503-fd52-4bc9-88f0-a37f6346916d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0b7c503-fd52-4bc9-88f0-a37f6346916d" (UID: "f0b7c503-fd52-4bc9-88f0-a37f6346916d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.143955 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b7c503-fd52-4bc9-88f0-a37f6346916d-kube-api-access-gl6zs" (OuterVolumeSpecName: "kube-api-access-gl6zs") pod "f0b7c503-fd52-4bc9-88f0-a37f6346916d" (UID: "f0b7c503-fd52-4bc9-88f0-a37f6346916d"). InnerVolumeSpecName "kube-api-access-gl6zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.236251 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl6zs\" (UniqueName: \"kubernetes.io/projected/f0b7c503-fd52-4bc9-88f0-a37f6346916d-kube-api-access-gl6zs\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.236285 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b7c503-fd52-4bc9-88f0-a37f6346916d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.242181 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8gtvk" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.253939 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vf9pp" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.255831 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.264017 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.270368 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.337436 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4dvh\" (UniqueName: \"kubernetes.io/projected/7cfdbceb-1c2b-482d-b331-06c839bee145-kube-api-access-g4dvh\") pod \"7cfdbceb-1c2b-482d-b331-06c839bee145\" (UID: \"7cfdbceb-1c2b-482d-b331-06c839bee145\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.337500 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-operator-scripts\") pod \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\" (UID: \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.337549 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cfdbceb-1c2b-482d-b331-06c839bee145-operator-scripts\") pod \"7cfdbceb-1c2b-482d-b331-06c839bee145\" (UID: \"7cfdbceb-1c2b-482d-b331-06c839bee145\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.337574 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddxg2\" (UniqueName: \"kubernetes.io/projected/6c7247c3-8d63-4753-969f-dcdb4eea86d1-kube-api-access-ddxg2\") pod \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\" (UID: \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.337607 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e98d76-209b-48ef-bb19-c996e1fd5fbb-operator-scripts\") pod \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\" (UID: \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.337690 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmvxs\" (UniqueName: \"kubernetes.io/projected/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-kube-api-access-kmvxs\") pod \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\" (UID: \"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.337756 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4fh7\" (UniqueName: \"kubernetes.io/projected/73e98d76-209b-48ef-bb19-c996e1fd5fbb-kube-api-access-m4fh7\") pod \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\" (UID: \"73e98d76-209b-48ef-bb19-c996e1fd5fbb\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.337837 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7247c3-8d63-4753-969f-dcdb4eea86d1-operator-scripts\") pod \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\" (UID: \"6c7247c3-8d63-4753-969f-dcdb4eea86d1\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.338628 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c7247c3-8d63-4753-969f-dcdb4eea86d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c7247c3-8d63-4753-969f-dcdb4eea86d1" (UID: "6c7247c3-8d63-4753-969f-dcdb4eea86d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.338965 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e98d76-209b-48ef-bb19-c996e1fd5fbb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73e98d76-209b-48ef-bb19-c996e1fd5fbb" (UID: "73e98d76-209b-48ef-bb19-c996e1fd5fbb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.341053 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cfdbceb-1c2b-482d-b331-06c839bee145-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cfdbceb-1c2b-482d-b331-06c839bee145" (UID: "7cfdbceb-1c2b-482d-b331-06c839bee145"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.341101 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc" (UID: "2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.344272 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-kube-api-access-kmvxs" (OuterVolumeSpecName: "kube-api-access-kmvxs") pod "2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc" (UID: "2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc"). InnerVolumeSpecName "kube-api-access-kmvxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.344300 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7247c3-8d63-4753-969f-dcdb4eea86d1-kube-api-access-ddxg2" (OuterVolumeSpecName: "kube-api-access-ddxg2") pod "6c7247c3-8d63-4753-969f-dcdb4eea86d1" (UID: "6c7247c3-8d63-4753-969f-dcdb4eea86d1"). InnerVolumeSpecName "kube-api-access-ddxg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.345537 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfdbceb-1c2b-482d-b331-06c839bee145-kube-api-access-g4dvh" (OuterVolumeSpecName: "kube-api-access-g4dvh") pod "7cfdbceb-1c2b-482d-b331-06c839bee145" (UID: "7cfdbceb-1c2b-482d-b331-06c839bee145"). InnerVolumeSpecName "kube-api-access-g4dvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.367882 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e98d76-209b-48ef-bb19-c996e1fd5fbb-kube-api-access-m4fh7" (OuterVolumeSpecName: "kube-api-access-m4fh7") pod "73e98d76-209b-48ef-bb19-c996e1fd5fbb" (UID: "73e98d76-209b-48ef-bb19-c996e1fd5fbb"). InnerVolumeSpecName "kube-api-access-m4fh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.439902 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb7q2\" (UniqueName: \"kubernetes.io/projected/f696cb20-6d6a-467c-9963-4ea7bd4bb894-kube-api-access-xb7q2\") pod \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\" (UID: \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.439969 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f696cb20-6d6a-467c-9963-4ea7bd4bb894-operator-scripts\") pod \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\" (UID: \"f696cb20-6d6a-467c-9963-4ea7bd4bb894\") " Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.440321 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e98d76-209b-48ef-bb19-c996e1fd5fbb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.440362 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmvxs\" (UniqueName: \"kubernetes.io/projected/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-kube-api-access-kmvxs\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.440373 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4fh7\" (UniqueName: \"kubernetes.io/projected/73e98d76-209b-48ef-bb19-c996e1fd5fbb-kube-api-access-m4fh7\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.440382 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7247c3-8d63-4753-969f-dcdb4eea86d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.440391 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4dvh\" (UniqueName: \"kubernetes.io/projected/7cfdbceb-1c2b-482d-b331-06c839bee145-kube-api-access-g4dvh\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.440399 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.440449 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cfdbceb-1c2b-482d-b331-06c839bee145-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.440461 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddxg2\" (UniqueName: \"kubernetes.io/projected/6c7247c3-8d63-4753-969f-dcdb4eea86d1-kube-api-access-ddxg2\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.440884 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f696cb20-6d6a-467c-9963-4ea7bd4bb894-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f696cb20-6d6a-467c-9963-4ea7bd4bb894" (UID: "f696cb20-6d6a-467c-9963-4ea7bd4bb894"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.443031 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f696cb20-6d6a-467c-9963-4ea7bd4bb894-kube-api-access-xb7q2" (OuterVolumeSpecName: "kube-api-access-xb7q2") pod "f696cb20-6d6a-467c-9963-4ea7bd4bb894" (UID: "f696cb20-6d6a-467c-9963-4ea7bd4bb894"). InnerVolumeSpecName "kube-api-access-xb7q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.542310 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb7q2\" (UniqueName: \"kubernetes.io/projected/f696cb20-6d6a-467c-9963-4ea7bd4bb894-kube-api-access-xb7q2\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.542346 4849 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f696cb20-6d6a-467c-9963-4ea7bd4bb894-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.549082 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5458-account-create-update-cn6mz" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.550387 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8gtvk" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.553099 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bkvhb" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.563185 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5458-account-create-update-cn6mz" event={"ID":"f696cb20-6d6a-467c-9963-4ea7bd4bb894","Type":"ContainerDied","Data":"1793a62f36db79e286e843cf05da31a21a4c72a7415bab2e747abd190daf5f6f"} Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.563228 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1793a62f36db79e286e843cf05da31a21a4c72a7415bab2e747abd190daf5f6f" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.563244 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8gtvk" event={"ID":"73e98d76-209b-48ef-bb19-c996e1fd5fbb","Type":"ContainerDied","Data":"8e9cb1ca53792e061a66142b849b326d821bb5978dde3d3c9dcb6a9238a3b2b3"} Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.563254 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e9cb1ca53792e061a66142b849b326d821bb5978dde3d3c9dcb6a9238a3b2b3" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.563262 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bkvhb" event={"ID":"2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc","Type":"ContainerDied","Data":"6e13b628e10c5a1d26b520d484f68f2e017bdc08cd3260c28a97f73a6a0e52b6"} Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.563272 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e13b628e10c5a1d26b520d484f68f2e017bdc08cd3260c28a97f73a6a0e52b6" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.564767 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-618a-account-create-update-j7swh" 
event={"ID":"7cfdbceb-1c2b-482d-b331-06c839bee145","Type":"ContainerDied","Data":"192aa2c48b7300d46520ca6c46bb2518c62b599d08498d19a01080bd287cdd3d"} Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.564871 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="192aa2c48b7300d46520ca6c46bb2518c62b599d08498d19a01080bd287cdd3d" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.564972 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-618a-account-create-update-j7swh" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.579877 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3150-account-create-update-zd6b9" event={"ID":"f0b7c503-fd52-4bc9-88f0-a37f6346916d","Type":"ContainerDied","Data":"55dfef7eaeaa131d9933ae3a35aef0f5c5446a2fd9aabf2e88b0e60ed5f9478d"} Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.579927 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55dfef7eaeaa131d9933ae3a35aef0f5c5446a2fd9aabf2e88b0e60ed5f9478d" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.580115 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3150-account-create-update-zd6b9" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.588985 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vf9pp" event={"ID":"6c7247c3-8d63-4753-969f-dcdb4eea86d1","Type":"ContainerDied","Data":"1e4e196f1e3bf5fe6b92c248a6d9864a8e6ac2eb1130679a7973fb2bc620bfd1"} Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.589026 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e4e196f1e3bf5fe6b92c248a6d9864a8e6ac2eb1130679a7973fb2bc620bfd1" Dec 09 11:48:26 crc kubenswrapper[4849]: I1209 11:48:26.589169 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vf9pp" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.888835 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-654nd"] Dec 09 11:48:31 crc kubenswrapper[4849]: E1209 11:48:31.889668 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7247c3-8d63-4753-969f-dcdb4eea86d1" containerName="mariadb-database-create" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.889689 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7247c3-8d63-4753-969f-dcdb4eea86d1" containerName="mariadb-database-create" Dec 09 11:48:31 crc kubenswrapper[4849]: E1209 11:48:31.889703 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f696cb20-6d6a-467c-9963-4ea7bd4bb894" containerName="mariadb-account-create-update" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.889712 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f696cb20-6d6a-467c-9963-4ea7bd4bb894" containerName="mariadb-account-create-update" Dec 09 11:48:31 crc kubenswrapper[4849]: E1209 11:48:31.889734 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e98d76-209b-48ef-bb19-c996e1fd5fbb" containerName="mariadb-database-create" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.889745 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e98d76-209b-48ef-bb19-c996e1fd5fbb" containerName="mariadb-database-create" Dec 09 11:48:31 crc kubenswrapper[4849]: E1209 11:48:31.889763 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc" containerName="mariadb-database-create" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.889771 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc" containerName="mariadb-database-create" Dec 09 11:48:31 crc kubenswrapper[4849]: E1209 11:48:31.889787 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfdbceb-1c2b-482d-b331-06c839bee145" containerName="mariadb-account-create-update" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.889794 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfdbceb-1c2b-482d-b331-06c839bee145" containerName="mariadb-account-create-update" Dec 09 11:48:31 crc kubenswrapper[4849]: E1209 11:48:31.889805 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b7c503-fd52-4bc9-88f0-a37f6346916d" containerName="mariadb-account-create-update" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.889813 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b7c503-fd52-4bc9-88f0-a37f6346916d" containerName="mariadb-account-create-update" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.890014 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc" containerName="mariadb-database-create" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.890031 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfdbceb-1c2b-482d-b331-06c839bee145" containerName="mariadb-account-create-update" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.890043 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e98d76-209b-48ef-bb19-c996e1fd5fbb" containerName="mariadb-database-create" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.890055 4849 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f696cb20-6d6a-467c-9963-4ea7bd4bb894" containerName="mariadb-account-create-update" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.890067 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b7c503-fd52-4bc9-88f0-a37f6346916d" containerName="mariadb-account-create-update" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.890083 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7247c3-8d63-4753-969f-dcdb4eea86d1" containerName="mariadb-database-create" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.890802 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.894677 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.895501 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.898012 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mgncd" Dec 09 11:48:31 crc kubenswrapper[4849]: I1209 11:48:31.930389 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-654nd"] Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.033261 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-scripts\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.033331 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.033505 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-config-data\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.033574 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/24f3ea3b-6680-4bc0-a6af-31fc894664ca-kube-api-access-fsj5s\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.135178 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-config-data\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.135483 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/24f3ea3b-6680-4bc0-a6af-31fc894664ca-kube-api-access-fsj5s\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.135587 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-scripts\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.135621 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.160331 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-config-data\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.161220 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.183342 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-scripts\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.193096 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/24f3ea3b-6680-4bc0-a6af-31fc894664ca-kube-api-access-fsj5s\") pod \"nova-cell0-conductor-db-sync-654nd\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.209744 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.720923 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-654nd"] Dec 09 11:48:32 crc kubenswrapper[4849]: I1209 11:48:32.732966 4849 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 11:48:33 crc kubenswrapper[4849]: I1209 11:48:33.689749 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-654nd" event={"ID":"24f3ea3b-6680-4bc0-a6af-31fc894664ca","Type":"ContainerStarted","Data":"21553e9c0182ee1894039a3bbd322236ba044138516f89c3109f5068fed1c450"} Dec 09 11:48:35 crc kubenswrapper[4849]: I1209 11:48:35.666823 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 11:48:41 crc kubenswrapper[4849]: I1209 11:48:41.781964 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-654nd" event={"ID":"24f3ea3b-6680-4bc0-a6af-31fc894664ca","Type":"ContainerStarted","Data":"fefbca211338881f99160d122884540c7dae45725114e94225498b6661dcdbe4"} Dec 09 11:48:41 crc kubenswrapper[4849]: I1209 11:48:41.787792 4849 generic.go:334] "Generic (PLEG): container finished" podID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerID="ccfba7a1aca585ebde17a94b5316d040cbfaf87c9ee2b2201bca575fd55ce801" exitCode=137 Dec 09 11:48:41 crc kubenswrapper[4849]: I1209 11:48:41.787836 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerDied","Data":"ccfba7a1aca585ebde17a94b5316d040cbfaf87c9ee2b2201bca575fd55ce801"} Dec 09 11:48:41 crc kubenswrapper[4849]: I1209 11:48:41.822100 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-654nd" podStartSLOduration=2.789350077 podStartE2EDuration="10.822078622s" podCreationTimestamp="2025-12-09 11:48:31 +0000 UTC" firstStartedPulling="2025-12-09 11:48:32.732688677 +0000 UTC m=+1295.272572993" lastFinishedPulling="2025-12-09 11:48:40.765417222 +0000 UTC m=+1303.305301538" observedRunningTime="2025-12-09 11:48:41.808991515 +0000 UTC m=+1304.348875831" watchObservedRunningTime="2025-12-09 11:48:41.822078622 +0000 UTC m=+1304.361962938" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.115602 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.137956 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vdpd\" (UniqueName: \"kubernetes.io/projected/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-kube-api-access-6vdpd\") pod \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.138440 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-sg-core-conf-yaml\") pod \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.138556 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-scripts\") pod \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.138727 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-config-data\") pod \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.138888 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-log-httpd\") pod \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.138991 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-combined-ca-bundle\") pod \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.139082 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-run-httpd\") pod \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\" (UID: \"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537\") " Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.139799 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" (UID: "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.140770 4849 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.140990 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" (UID: "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.157882 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-kube-api-access-6vdpd" (OuterVolumeSpecName: "kube-api-access-6vdpd") pod "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" (UID: "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537"). InnerVolumeSpecName "kube-api-access-6vdpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.173605 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-scripts" (OuterVolumeSpecName: "scripts") pod "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" (UID: "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.219625 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" (UID: "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.242736 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vdpd\" (UniqueName: \"kubernetes.io/projected/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-kube-api-access-6vdpd\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.242775 4849 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.242791 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.242802 4849 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.268574 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" (UID: "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.284929 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-config-data" (OuterVolumeSpecName: "config-data") pod "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" (UID: "3e6d4b12-c1ef-42d0-902d-7bb35ec0e537"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.343982 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.344026 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.799391 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e6d4b12-c1ef-42d0-902d-7bb35ec0e537","Type":"ContainerDied","Data":"3bc7ff0026ea94d171e101f02a8e4372447240d1db9c273477312d3926c99a67"} Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.799452 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.799803 4849 scope.go:117] "RemoveContainer" containerID="ccfba7a1aca585ebde17a94b5316d040cbfaf87c9ee2b2201bca575fd55ce801" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.820256 4849 scope.go:117] "RemoveContainer" containerID="9fd1977b88fb17ff4953c855aef5c1ce16ff8970575e546c7f37d26d8fd3fa18" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.838589 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.845375 4849 scope.go:117] "RemoveContainer" containerID="3a2caebb8bc9366f18135518bc0a247f015ccaa0d6578aac2ccb50da913ed52b" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.847969 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.868081 4849 scope.go:117] "RemoveContainer" containerID="9679952b5fc6c2ec3a4ed1ff2d932efcc0075c44e81c3b5993ba0b9c06c7460b" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.872989 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:48:42 crc kubenswrapper[4849]: E1209 11:48:42.873375 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="sg-core" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.873394 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="sg-core" Dec 09 11:48:42 crc kubenswrapper[4849]: E1209 11:48:42.873432 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="ceilometer-central-agent" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.873439 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="ceilometer-central-agent" Dec 09 11:48:42 crc kubenswrapper[4849]: E1209 11:48:42.873458 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="proxy-httpd" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.873464 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="proxy-httpd" Dec 09 11:48:42 crc kubenswrapper[4849]: E1209 11:48:42.873476 4849 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="ceilometer-notification-agent" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.873482 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="ceilometer-notification-agent" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.873734 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="ceilometer-notification-agent" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.873749 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="proxy-httpd" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.873765 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="ceilometer-central-agent" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.873774 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" containerName="sg-core" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.875324 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.882662 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.883124 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.911137 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.954016 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.954071 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-config-data\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.954126 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld6wx\" (UniqueName: \"kubernetes.io/projected/695f3a1c-1152-43f6-b3ac-d1f79588d45d-kube-api-access-ld6wx\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.954183 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-log-httpd\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.954208 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.954231 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-scripts\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:42 crc kubenswrapper[4849]: I1209 11:48:42.954360 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-run-httpd\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.056374 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-log-httpd\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.056475 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.056509 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-scripts\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.056561 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-run-httpd\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.056620 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.056645 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-config-data\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.056694 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld6wx\" (UniqueName: \"kubernetes.io/projected/695f3a1c-1152-43f6-b3ac-d1f79588d45d-kube-api-access-ld6wx\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.057145 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-run-httpd\") pod 
\"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.057802 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-log-httpd\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.061039 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-scripts\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.063060 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.063338 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.067198 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-config-data\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.080156 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld6wx\" (UniqueName: \"kubernetes.io/projected/695f3a1c-1152-43f6-b3ac-d1f79588d45d-kube-api-access-ld6wx\") pod \"ceilometer-0\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.193211 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:48:43 crc kubenswrapper[4849]: I1209 11:48:43.804800 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:48:43 crc kubenswrapper[4849]: W1209 11:48:43.824077 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod695f3a1c_1152_43f6_b3ac_d1f79588d45d.slice/crio-1a2c1ae56dd039023c6a24232ac3623bec442d80f9b7c331841b7cebc9596509 WatchSource:0}: Error finding container 1a2c1ae56dd039023c6a24232ac3623bec442d80f9b7c331841b7cebc9596509: Status 404 returned error can't find the container with id 1a2c1ae56dd039023c6a24232ac3623bec442d80f9b7c331841b7cebc9596509 Dec 09 11:48:44 crc kubenswrapper[4849]: I1209 11:48:44.547198 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6d4b12-c1ef-42d0-902d-7bb35ec0e537" path="/var/lib/kubelet/pods/3e6d4b12-c1ef-42d0-902d-7bb35ec0e537/volumes" Dec 09 11:48:44 crc kubenswrapper[4849]: I1209 11:48:44.826950 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerStarted","Data":"fffad01abbeaf2d78fdb855b92961167a0fd5a7fe766e5b6f28bb0960de84984"} Dec 09 11:48:44 crc kubenswrapper[4849]: I1209 11:48:44.827286 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerStarted","Data":"1a2c1ae56dd039023c6a24232ac3623bec442d80f9b7c331841b7cebc9596509"} Dec 09 11:48:45 crc kubenswrapper[4849]: I1209 11:48:45.854469 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerStarted","Data":"4f4c4374de258dc43249b24f8318ae3f1a45a2c5120b596a356baad874fcaa8e"} Dec 09 11:48:46 crc kubenswrapper[4849]: I1209 11:48:46.865954 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerStarted","Data":"f2696f4e8fad7d3ade1d468af3bb0a37861808ffab3b09ec3ce196d54fdac99b"} Dec 09 11:48:47 crc kubenswrapper[4849]: I1209 11:48:47.876298 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerStarted","Data":"647a1aa88e4165d52ec84fd16b09332e66637e1c248fbcc1709e1bbe2b857513"} Dec 09 11:48:47 crc kubenswrapper[4849]: I1209 11:48:47.877544 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 11:48:47 crc kubenswrapper[4849]: I1209 11:48:47.896078 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.755747945 podStartE2EDuration="5.896047854s" podCreationTimestamp="2025-12-09 11:48:42 +0000 UTC" firstStartedPulling="2025-12-09 11:48:43.826845842 +0000 UTC m=+1306.366730158" lastFinishedPulling="2025-12-09 11:48:46.967145741 +0000 UTC m=+1309.507030067" observedRunningTime="2025-12-09 11:48:47.894363301 +0000 UTC m=+1310.434247617" watchObservedRunningTime="2025-12-09 11:48:47.896047854 +0000 UTC m=+1310.435932160" Dec 09 11:48:51 crc kubenswrapper[4849]: I1209 11:48:51.132339 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:48:51 crc kubenswrapper[4849]: I1209 11:48:51.132997 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:48:51 crc kubenswrapper[4849]: I1209 11:48:51.133081 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:48:51 crc kubenswrapper[4849]: I1209 11:48:51.134197 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5d0f54890d644c510ae563a6b696f7236880271ca1fac58424d719b2dbb5e99"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 11:48:51 crc kubenswrapper[4849]: I1209 11:48:51.134269 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://b5d0f54890d644c510ae563a6b696f7236880271ca1fac58424d719b2dbb5e99" gracePeriod=600 Dec 09 11:48:51 crc kubenswrapper[4849]: E1209 11:48:51.264133 4849 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod157c6f6c_042b_4da3_934e_a08474e56486.slice/crio-b5d0f54890d644c510ae563a6b696f7236880271ca1fac58424d719b2dbb5e99.scope\": RecentStats: unable to find data in memory cache]" Dec 09 11:48:51 crc kubenswrapper[4849]: I1209 11:48:51.962779 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="b5d0f54890d644c510ae563a6b696f7236880271ca1fac58424d719b2dbb5e99" exitCode=0 Dec 09 11:48:51 crc kubenswrapper[4849]: I1209 11:48:51.963062 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"b5d0f54890d644c510ae563a6b696f7236880271ca1fac58424d719b2dbb5e99"} Dec 09 11:48:51 crc kubenswrapper[4849]: I1209 11:48:51.963096 4849 scope.go:117] "RemoveContainer" containerID="0a2af74fde05e47664890560ba0230403bcc6a0b200101e65907871ade0b4a58" Dec 09 11:48:52 crc kubenswrapper[4849]: I1209 11:48:52.974890 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7"} Dec 09 11:48:57 crc kubenswrapper[4849]: I1209 11:48:57.014644 4849 generic.go:334] "Generic (PLEG): container finished" podID="24f3ea3b-6680-4bc0-a6af-31fc894664ca" containerID="fefbca211338881f99160d122884540c7dae45725114e94225498b6661dcdbe4" exitCode=0 Dec 09 11:48:57 crc kubenswrapper[4849]: I1209 11:48:57.014733 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-654nd" 
event={"ID":"24f3ea3b-6680-4bc0-a6af-31fc894664ca","Type":"ContainerDied","Data":"fefbca211338881f99160d122884540c7dae45725114e94225498b6661dcdbe4"} Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.401721 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.465336 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-config-data\") pod \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.465454 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-scripts\") pod \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.465487 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/24f3ea3b-6680-4bc0-a6af-31fc894664ca-kube-api-access-fsj5s\") pod \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.465528 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-combined-ca-bundle\") pod \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\" (UID: \"24f3ea3b-6680-4bc0-a6af-31fc894664ca\") " Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.470810 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f3ea3b-6680-4bc0-a6af-31fc894664ca-kube-api-access-fsj5s" (OuterVolumeSpecName: "kube-api-access-fsj5s") pod "24f3ea3b-6680-4bc0-a6af-31fc894664ca" (UID: "24f3ea3b-6680-4bc0-a6af-31fc894664ca"). InnerVolumeSpecName "kube-api-access-fsj5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.473671 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-scripts" (OuterVolumeSpecName: "scripts") pod "24f3ea3b-6680-4bc0-a6af-31fc894664ca" (UID: "24f3ea3b-6680-4bc0-a6af-31fc894664ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.498618 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24f3ea3b-6680-4bc0-a6af-31fc894664ca" (UID: "24f3ea3b-6680-4bc0-a6af-31fc894664ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.507339 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-config-data" (OuterVolumeSpecName: "config-data") pod "24f3ea3b-6680-4bc0-a6af-31fc894664ca" (UID: "24f3ea3b-6680-4bc0-a6af-31fc894664ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.567841 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.568170 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.568265 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/24f3ea3b-6680-4bc0-a6af-31fc894664ca-kube-api-access-fsj5s\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:58 crc kubenswrapper[4849]: I1209 11:48:58.568342 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f3ea3b-6680-4bc0-a6af-31fc894664ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.034776 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-654nd" event={"ID":"24f3ea3b-6680-4bc0-a6af-31fc894664ca","Type":"ContainerDied","Data":"21553e9c0182ee1894039a3bbd322236ba044138516f89c3109f5068fed1c450"} Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.034825 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21553e9c0182ee1894039a3bbd322236ba044138516f89c3109f5068fed1c450" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.034844 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-654nd" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.151447 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 11:48:59 crc kubenswrapper[4849]: E1209 11:48:59.151845 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f3ea3b-6680-4bc0-a6af-31fc894664ca" containerName="nova-cell0-conductor-db-sync" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.151872 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f3ea3b-6680-4bc0-a6af-31fc894664ca" containerName="nova-cell0-conductor-db-sync" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.152056 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f3ea3b-6680-4bc0-a6af-31fc894664ca" containerName="nova-cell0-conductor-db-sync" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.152732 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.155230 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.164891 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.178097 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460a5c7d-f40b-44eb-a861-b5b12c72d128-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"460a5c7d-f40b-44eb-a861-b5b12c72d128\") " pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.178220 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460a5c7d-f40b-44eb-a861-b5b12c72d128-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"460a5c7d-f40b-44eb-a861-b5b12c72d128\") " pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.178264 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trpdk\" (UniqueName: \"kubernetes.io/projected/460a5c7d-f40b-44eb-a861-b5b12c72d128-kube-api-access-trpdk\") pod \"nova-cell0-conductor-0\" (UID: \"460a5c7d-f40b-44eb-a861-b5b12c72d128\") " pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.186991 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mgncd" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.279540 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460a5c7d-f40b-44eb-a861-b5b12c72d128-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"460a5c7d-f40b-44eb-a861-b5b12c72d128\") " pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.279650 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460a5c7d-f40b-44eb-a861-b5b12c72d128-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"460a5c7d-f40b-44eb-a861-b5b12c72d128\") " pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.279683 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trpdk\" (UniqueName: \"kubernetes.io/projected/460a5c7d-f40b-44eb-a861-b5b12c72d128-kube-api-access-trpdk\") pod \"nova-cell0-conductor-0\" (UID: \"460a5c7d-f40b-44eb-a861-b5b12c72d128\") " pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.285314 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460a5c7d-f40b-44eb-a861-b5b12c72d128-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"460a5c7d-f40b-44eb-a861-b5b12c72d128\") " pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.286794 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460a5c7d-f40b-44eb-a861-b5b12c72d128-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"460a5c7d-f40b-44eb-a861-b5b12c72d128\") " pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.296819 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trpdk\" (UniqueName: \"kubernetes.io/projected/460a5c7d-f40b-44eb-a861-b5b12c72d128-kube-api-access-trpdk\") pod \"nova-cell0-conductor-0\" (UID: \"460a5c7d-f40b-44eb-a861-b5b12c72d128\") " pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.501300 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 11:48:59 crc kubenswrapper[4849]: I1209 11:48:59.951007 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 11:49:00 crc kubenswrapper[4849]: I1209 11:49:00.054621 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"460a5c7d-f40b-44eb-a861-b5b12c72d128","Type":"ContainerStarted","Data":"e5172ec77016e70083e5c23b960e59bbec817e493eef991f55c53610f78fb7a2"} Dec 09 11:49:01 crc kubenswrapper[4849]: I1209 11:49:01.064092 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"460a5c7d-f40b-44eb-a861-b5b12c72d128","Type":"ContainerStarted","Data":"10b22197bcedea430709b1542f92698ca35ae05b2dd9ea79595990865bca4041"} Dec 09 11:49:01 crc kubenswrapper[4849]: I1209 11:49:01.065162 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 09 11:49:01 crc kubenswrapper[4849]: I1209 11:49:01.090135 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.090116107 podStartE2EDuration="2.090116107s" podCreationTimestamp="2025-12-09 11:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:01.080742743 +0000 UTC m=+1323.620627059" watchObservedRunningTime="2025-12-09 11:49:01.090116107 +0000 UTC m=+1323.630000423" Dec 09 11:49:09 crc kubenswrapper[4849]: I1209 11:49:09.533051 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.011628 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7qvhg"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.013993 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.017158 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.017957 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.073334 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7qvhg"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.104580 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-scripts\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.104691 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pgmh\" (UniqueName: \"kubernetes.io/projected/2a968b26-11b2-421b-89bc-d481ce7ebe0a-kube-api-access-7pgmh\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.104722 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-config-data\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.104786 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.208560 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-scripts\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.208686 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pgmh\" (UniqueName: \"kubernetes.io/projected/2a968b26-11b2-421b-89bc-d481ce7ebe0a-kube-api-access-7pgmh\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.208715 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-config-data\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.208793 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.221694 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-scripts\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.237186 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.240536 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.242461 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.243035 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-config-data\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.251737 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.261265 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pgmh\" (UniqueName: \"kubernetes.io/projected/2a968b26-11b2-421b-89bc-d481ce7ebe0a-kube-api-access-7pgmh\") pod \"nova-cell0-cell-mapping-7qvhg\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.269330 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.350249 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-config-data\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.350310 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qknkn\" (UniqueName: \"kubernetes.io/projected/49f8d36e-53c6-4ae6-a088-ee76c48897af-kube-api-access-qknkn\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.350339 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.350356 4849 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49f8d36e-53c6-4ae6-a088-ee76c48897af-logs\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.353530 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.355117 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.362384 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.374368 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.383879 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.454447 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qknkn\" (UniqueName: \"kubernetes.io/projected/49f8d36e-53c6-4ae6-a088-ee76c48897af-kube-api-access-qknkn\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.454527 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.454562 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49f8d36e-53c6-4ae6-a088-ee76c48897af-logs\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.454713 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-config-data\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.456260 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49f8d36e-53c6-4ae6-a088-ee76c48897af-logs\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.468363 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.471143 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-config-data\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc 
kubenswrapper[4849]: I1209 11:49:10.496950 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qknkn\" (UniqueName: \"kubernetes.io/projected/49f8d36e-53c6-4ae6-a088-ee76c48897af-kube-api-access-qknkn\") pod \"nova-api-0\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") " pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.505643 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.521930 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.536649 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.543203 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.558854 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssq9m\" (UniqueName: \"kubernetes.io/projected/9a386901-7188-4596-8fa6-d007406d2bbf-kube-api-access-ssq9m\") pod \"nova-scheduler-0\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") " pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.558906 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") " pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.558938 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-config-data\") pod \"nova-scheduler-0\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") " pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.665458 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssq9m\" (UniqueName: \"kubernetes.io/projected/9a386901-7188-4596-8fa6-d007406d2bbf-kube-api-access-ssq9m\") pod \"nova-scheduler-0\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") " pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.665529 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") " pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.665557 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.665604 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-config-data\") pod \"nova-scheduler-0\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") " pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.665631 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22srv\" (UniqueName: \"kubernetes.io/projected/073f7523-bbfd-4875-a17a-f9034464cb01-kube-api-access-22srv\") pod \"nova-cell1-novncproxy-0\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.665716 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.670518 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-config-data\") pod \"nova-scheduler-0\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") " pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.676625 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.679759 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") " pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.725982 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssq9m\" (UniqueName: \"kubernetes.io/projected/9a386901-7188-4596-8fa6-d007406d2bbf-kube-api-access-ssq9m\") pod \"nova-scheduler-0\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") " pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.726702 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.730701 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.742788 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.769386 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.769478 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22srv\" (UniqueName: \"kubernetes.io/projected/073f7523-bbfd-4875-a17a-f9034464cb01-kube-api-access-22srv\") pod \"nova-cell1-novncproxy-0\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.769576 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.774520 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.783090 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.835068 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22srv\" (UniqueName: \"kubernetes.io/projected/073f7523-bbfd-4875-a17a-f9034464cb01-kube-api-access-22srv\") pod \"nova-cell1-novncproxy-0\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.873577 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-config-data\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.873632 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9xq\" (UniqueName: \"kubernetes.io/projected/12a90b3c-512f-4d27-89a8-b9c714a81952-kube-api-access-gx9xq\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.873685 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12a90b3c-512f-4d27-89a8-b9c714a81952-logs\") pod \"nova-metadata-0\" (UID: 
\"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.873761 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.905889 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.962559 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.976265 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12a90b3c-512f-4d27-89a8-b9c714a81952-logs\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.976358 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.976475 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx9xq\" (UniqueName: \"kubernetes.io/projected/12a90b3c-512f-4d27-89a8-b9c714a81952-kube-api-access-gx9xq\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.976576 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-config-data\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.976825 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12a90b3c-512f-4d27-89a8-b9c714a81952-logs\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.986887 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 11:49:10 crc kubenswrapper[4849]: I1209 11:49:10.992110 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.003104 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-config-data\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.027098 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx9xq\" (UniqueName: \"kubernetes.io/projected/12a90b3c-512f-4d27-89a8-b9c714a81952-kube-api-access-gx9xq\") pod \"nova-metadata-0\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " pod="openstack/nova-metadata-0" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.046717 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-xv67s"] Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.048372 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.103169 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-xv67s"] Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.141247 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.216749 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.216948 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-config\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.217026 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-dns-svc\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.217161 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.217182 4849 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zszk\" (UniqueName: \"kubernetes.io/projected/db40c8de-3699-4c66-be24-cc3f9c55bf6d-kube-api-access-2zszk\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.318979 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-config\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.319240 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-dns-svc\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.319298 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.319314 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zszk\" (UniqueName: \"kubernetes.io/projected/db40c8de-3699-4c66-be24-cc3f9c55bf6d-kube-api-access-2zszk\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.319354 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.320123 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-config\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.320387 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-dns-svc\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.320884 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.320907 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.327311 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7qvhg"] Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.377368 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zszk\" (UniqueName: \"kubernetes.io/projected/db40c8de-3699-4c66-be24-cc3f9c55bf6d-kube-api-access-2zszk\") pod \"dnsmasq-dns-566b5b7845-xv67s\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:11 crc kubenswrapper[4849]: I1209 11:49:11.422726 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.092137 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.095207 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 11:49:12 crc kubenswrapper[4849]: W1209 11:49:12.097794 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49f8d36e_53c6_4ae6_a088_ee76c48897af.slice/crio-fd6982179f96fbd970c1bc7937e5f973e5cf17cec2ac8d2c8532d1d3895982a1 WatchSource:0}: Error finding container fd6982179f96fbd970c1bc7937e5f973e5cf17cec2ac8d2c8532d1d3895982a1: Status 404 returned error can't find the container with id fd6982179f96fbd970c1bc7937e5f973e5cf17cec2ac8d2c8532d1d3895982a1 Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.256453 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7qvhg" event={"ID":"2a968b26-11b2-421b-89bc-d481ce7ebe0a","Type":"ContainerStarted","Data":"5f61ede7d81f986af270ddec4f312688d608480c925b7b5923a61e2788a7c3c5"} Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.256786 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7qvhg" event={"ID":"2a968b26-11b2-421b-89bc-d481ce7ebe0a","Type":"ContainerStarted","Data":"0fd34d09265fbb6fc70002d7a150eec18205ed4dc91623582e77209b2ceba324"} Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.259559 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49f8d36e-53c6-4ae6-a088-ee76c48897af","Type":"ContainerStarted","Data":"fd6982179f96fbd970c1bc7937e5f973e5cf17cec2ac8d2c8532d1d3895982a1"} Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.261303 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"073f7523-bbfd-4875-a17a-f9034464cb01","Type":"ContainerStarted","Data":"1a64aae5f41aad3a285714c1cfcfb5d30c8be5dc6fc90bcf918e5bcc7ad6923b"} Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.293509 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.309739 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.313953 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7qvhg" 
podStartSLOduration=3.313936709 podStartE2EDuration="3.313936709s" podCreationTimestamp="2025-12-09 11:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:12.312382471 +0000 UTC m=+1334.852266797" watchObservedRunningTime="2025-12-09 11:49:12.313936709 +0000 UTC m=+1334.853821025" Dec 09 11:49:12 crc kubenswrapper[4849]: W1209 11:49:12.315402 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12a90b3c_512f_4d27_89a8_b9c714a81952.slice/crio-00b8623390973c8810d0f6e919281f5db94560c75411d5eb7cd2bd6397378b84 WatchSource:0}: Error finding container 00b8623390973c8810d0f6e919281f5db94560c75411d5eb7cd2bd6397378b84: Status 404 returned error can't find the container with id 00b8623390973c8810d0f6e919281f5db94560c75411d5eb7cd2bd6397378b84 Dec 09 11:49:12 crc kubenswrapper[4849]: W1209 11:49:12.487920 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb40c8de_3699_4c66_be24_cc3f9c55bf6d.slice/crio-1a81077af902e8e50cf93d79a0c132201342749886d62781f8d777fde4acf919 WatchSource:0}: Error finding container 1a81077af902e8e50cf93d79a0c132201342749886d62781f8d777fde4acf919: Status 404 returned error can't find the container with id 1a81077af902e8e50cf93d79a0c132201342749886d62781f8d777fde4acf919 Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.496046 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-xv67s"] Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.639608 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrffl"] Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.640902 4849 util.go:30] "No sandbox for pod can be found. 
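Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nrffl"

The "SyncLoop ADD" / "No sandbox for pod can be found" pair above is the kubelet reacting to a pod object that just appeared on the API server: with no existing sandbox (pause container) for the pod, it starts one, and the reconciler entries that follow attach and mount the pod's volumes. A minimal client-go sketch that observes the same ADD/UPDATE/DELETE stream from outside the kubelet (assumptions: in-cluster config and namespace "openstack"; this illustrates the API-side events, it is not kubelet's own sync loop):

    // watchpods.go - print pod watch events for the "openstack" namespace.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // ADDED/MODIFIED/DELETED events here correspond to the kubelet's
        // "SyncLoop ADD/UPDATE/DELETE" source="api" entries in this log.
        w, err := client.CoreV1().Pods("openstack").Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            pod, ok := ev.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            fmt.Printf("%s %s/%s uid=%s\n", ev.Type, pod.Namespace, pod.Name, pod.UID)
        }
    }
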
Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.644910 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.645129 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.686739 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrffl"] Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.800466 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.800557 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-scripts\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.800611 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-config-data\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.800655 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtqlr\" (UniqueName: \"kubernetes.io/projected/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-kube-api-access-dtqlr\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.903068 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.903171 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-scripts\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.903229 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-config-data\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.903273 4849 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-dtqlr\" (UniqueName: \"kubernetes.io/projected/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-kube-api-access-dtqlr\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.908590 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-scripts\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.909911 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-config-data\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.910524 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.929769 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtqlr\" (UniqueName: \"kubernetes.io/projected/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-kube-api-access-dtqlr\") pod \"nova-cell1-conductor-db-sync-nrffl\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:12 crc kubenswrapper[4849]: I1209 11:49:12.998459 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:13 crc kubenswrapper[4849]: I1209 11:49:13.201164 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 11:49:13 crc kubenswrapper[4849]: I1209 11:49:13.306575 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9a386901-7188-4596-8fa6-d007406d2bbf","Type":"ContainerStarted","Data":"94534423595cbbd006ba02a43eef4dd226b6ab343c25d13fef9fb2f986931b33"} Dec 09 11:49:13 crc kubenswrapper[4849]: I1209 11:49:13.309798 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12a90b3c-512f-4d27-89a8-b9c714a81952","Type":"ContainerStarted","Data":"00b8623390973c8810d0f6e919281f5db94560c75411d5eb7cd2bd6397378b84"} Dec 09 11:49:13 crc kubenswrapper[4849]: I1209 11:49:13.312607 4849 generic.go:334] "Generic (PLEG): container finished" podID="db40c8de-3699-4c66-be24-cc3f9c55bf6d" containerID="294ad97c3b7ac7b22f5626086dfd3083852115760f4ddb967951e1f4eaed6dd4" exitCode=0 Dec 09 11:49:13 crc kubenswrapper[4849]: I1209 11:49:13.313537 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" event={"ID":"db40c8de-3699-4c66-be24-cc3f9c55bf6d","Type":"ContainerDied","Data":"294ad97c3b7ac7b22f5626086dfd3083852115760f4ddb967951e1f4eaed6dd4"} Dec 09 11:49:13 crc kubenswrapper[4849]: I1209 11:49:13.313564 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" event={"ID":"db40c8de-3699-4c66-be24-cc3f9c55bf6d","Type":"ContainerStarted","Data":"1a81077af902e8e50cf93d79a0c132201342749886d62781f8d777fde4acf919"} Dec 09 11:49:13 crc kubenswrapper[4849]: I1209 11:49:13.778254 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrffl"] Dec 09 11:49:14 crc kubenswrapper[4849]: I1209 11:49:14.338674 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" event={"ID":"db40c8de-3699-4c66-be24-cc3f9c55bf6d","Type":"ContainerStarted","Data":"1d8cbcf4455793d746c47187ebb6f8c13e31141c875be5823b564d7ccb80c254"} Dec 09 11:49:14 crc kubenswrapper[4849]: I1209 11:49:14.340239 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:14 crc kubenswrapper[4849]: I1209 11:49:14.350022 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nrffl" event={"ID":"db78fd7e-e02f-4ffa-9c38-675b7b021cc7","Type":"ContainerStarted","Data":"f3eda02d24fc428570df77d00f42f74cfa8b8429a47212e4d92a16e1f17d15c7"} Dec 09 11:49:14 crc kubenswrapper[4849]: I1209 11:49:14.350060 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nrffl" event={"ID":"db78fd7e-e02f-4ffa-9c38-675b7b021cc7","Type":"ContainerStarted","Data":"ea2306d4a745803ce4dacbd47378493beb5ed2b3b901e7cad90a4589103a3798"} Dec 09 11:49:14 crc kubenswrapper[4849]: I1209 11:49:14.378247 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" podStartSLOduration=4.378230377 podStartE2EDuration="4.378230377s" podCreationTimestamp="2025-12-09 11:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:14.36918059 +0000 UTC m=+1336.909064906" watchObservedRunningTime="2025-12-09 
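11:49:14.378230377 +0000 UTC m=+1336.918114693"

In these pod_startup_latency_tracker entries the arithmetic is visible in the fields themselves: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). When the pull timestamps are the zero value ("0001-01-01 00:00:00 +0000 UTC"), as for dnsmasq-dns-566b5b7845-xv67s above, no pull was recorded and the two durations coincide. A stdlib-only Go sketch that reproduces the numbers of the nova-scheduler-0 entry further down (8.488100242s end-to-end, 3.54172427s SLO); the timestamps are copied from that entry and the layout is Go's default time.Time.String() format:

    // sloduration.go - recompute podStartE2EDuration and podStartSLOduration.
    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func parse(s string) time.Time {
        // Drop the monotonic-clock suffix (" m=+...") that the log appends.
        if i := strings.Index(s, " m=+"); i >= 0 {
            s = s[:i]
        }
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := parse("2025-12-09 11:49:10 +0000 UTC")
        firstPull := parse("2025-12-09 11:49:12.339072387 +0000 UTC m=+1334.878956703")
        lastPull := parse("2025-12-09 11:49:17.285448359 +0000 UTC m=+1339.825332675")
        running := parse("2025-12-09 11:49:18.488100242 +0000 UTC m=+1341.027984558")

        e2e := running.Sub(created)          // 8.488100242s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 3.54172427s = podStartSLOduration
        fmt.Println("E2E:", e2e, "SLO:", slo)
    }
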
Dec 09 11:49:14 crc kubenswrapper[4849]: I1209 11:49:14.394425 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nrffl" podStartSLOduration=2.394386069 podStartE2EDuration="2.394386069s" podCreationTimestamp="2025-12-09 11:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:14.392517633 +0000 UTC m=+1336.932401949" watchObservedRunningTime="2025-12-09 11:49:14.394386069 +0000 UTC m=+1336.934270385" Dec 09 11:49:15 crc kubenswrapper[4849]: I1209 11:49:15.721265 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:15 crc kubenswrapper[4849]: I1209 11:49:15.743634 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.180020 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.184934 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="31f3ac0d-dbb7-4371-8718-ddfafd5481f7" containerName="kube-state-metrics" containerID="cri-o://194e5c617c80510f556f08b123a5da6fd8d5780b77d8e84e7287ea684ea14bbb" gracePeriod=30 Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.408705 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9a386901-7188-4596-8fa6-d007406d2bbf","Type":"ContainerStarted","Data":"7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd"} Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.434310 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49f8d36e-53c6-4ae6-a088-ee76c48897af","Type":"ContainerStarted","Data":"52c856d333364772df959d137d4c905f5f35235af62fefd2ca9cc3e5ddd92ea0"} Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.434376 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49f8d36e-53c6-4ae6-a088-ee76c48897af","Type":"ContainerStarted","Data":"81fe28242a6913283a58a3558807e4b5a46d9c513891346c8d6c028bb24d95d8"} Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.441199 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12a90b3c-512f-4d27-89a8-b9c714a81952","Type":"ContainerStarted","Data":"d11ced4db23caed73033791f97c5ecc8dba45a86e9b563113a813e100a0bacdb"} Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.441239 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12a90b3c-512f-4d27-89a8-b9c714a81952","Type":"ContainerStarted","Data":"75c1c13f941c111ad4b60da225fba243f46549aa4bb8310446fb5e23122303fd"} Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.441299 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="12a90b3c-512f-4d27-89a8-b9c714a81952" containerName="nova-metadata-log" containerID="cri-o://75c1c13f941c111ad4b60da225fba243f46549aa4bb8310446fb5e23122303fd" gracePeriod=30 Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.441316 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="12a90b3c-512f-4d27-89a8-b9c714a81952"
containerName="nova-metadata-metadata" containerID="cri-o://d11ced4db23caed73033791f97c5ecc8dba45a86e9b563113a813e100a0bacdb" gracePeriod=30 Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.447279 4849 generic.go:334] "Generic (PLEG): container finished" podID="31f3ac0d-dbb7-4371-8718-ddfafd5481f7" containerID="194e5c617c80510f556f08b123a5da6fd8d5780b77d8e84e7287ea684ea14bbb" exitCode=2 Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.447356 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"31f3ac0d-dbb7-4371-8718-ddfafd5481f7","Type":"ContainerDied","Data":"194e5c617c80510f556f08b123a5da6fd8d5780b77d8e84e7287ea684ea14bbb"} Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.450092 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"073f7523-bbfd-4875-a17a-f9034464cb01","Type":"ContainerStarted","Data":"4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce"} Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.450242 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="073f7523-bbfd-4875-a17a-f9034464cb01" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce" gracePeriod=30 Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.488123 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.54172427 podStartE2EDuration="8.488100242s" podCreationTimestamp="2025-12-09 11:49:10 +0000 UTC" firstStartedPulling="2025-12-09 11:49:12.339072387 +0000 UTC m=+1334.878956703" lastFinishedPulling="2025-12-09 11:49:17.285448359 +0000 UTC m=+1339.825332675" observedRunningTime="2025-12-09 11:49:18.46316641 +0000 UTC m=+1341.003050726" watchObservedRunningTime="2025-12-09 11:49:18.488100242 +0000 UTC m=+1341.027984558" Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.508919 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.284275746 podStartE2EDuration="8.508899041s" podCreationTimestamp="2025-12-09 11:49:10 +0000 UTC" firstStartedPulling="2025-12-09 11:49:12.107623231 +0000 UTC m=+1334.647507547" lastFinishedPulling="2025-12-09 11:49:17.332246516 +0000 UTC m=+1339.872130842" observedRunningTime="2025-12-09 11:49:18.484728008 +0000 UTC m=+1341.024612324" watchObservedRunningTime="2025-12-09 11:49:18.508899041 +0000 UTC m=+1341.048783357" Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.593776 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.648130816 podStartE2EDuration="8.593757979s" podCreationTimestamp="2025-12-09 11:49:10 +0000 UTC" firstStartedPulling="2025-12-09 11:49:12.338919373 +0000 UTC m=+1334.878803689" lastFinishedPulling="2025-12-09 11:49:17.284546536 +0000 UTC m=+1339.824430852" observedRunningTime="2025-12-09 11:49:18.506816789 +0000 UTC m=+1341.046701105" watchObservedRunningTime="2025-12-09 11:49:18.593757979 +0000 UTC m=+1341.133642295" Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.623383 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.444506064 podStartE2EDuration="8.623358488s" podCreationTimestamp="2025-12-09 11:49:10 +0000 UTC" firstStartedPulling="2025-12-09 
11:49:12.108036721 +0000 UTC m=+1334.647921037" lastFinishedPulling="2025-12-09 11:49:17.286889145 +0000 UTC m=+1339.826773461" observedRunningTime="2025-12-09 11:49:18.568004286 +0000 UTC m=+1341.107888612" watchObservedRunningTime="2025-12-09 11:49:18.623358488 +0000 UTC m=+1341.163242814" Dec 09 11:49:18 crc kubenswrapper[4849]: I1209 11:49:18.965077 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.191426 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8s5s\" (UniqueName: \"kubernetes.io/projected/31f3ac0d-dbb7-4371-8718-ddfafd5481f7-kube-api-access-m8s5s\") pod \"31f3ac0d-dbb7-4371-8718-ddfafd5481f7\" (UID: \"31f3ac0d-dbb7-4371-8718-ddfafd5481f7\") " Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.208609 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f3ac0d-dbb7-4371-8718-ddfafd5481f7-kube-api-access-m8s5s" (OuterVolumeSpecName: "kube-api-access-m8s5s") pod "31f3ac0d-dbb7-4371-8718-ddfafd5481f7" (UID: "31f3ac0d-dbb7-4371-8718-ddfafd5481f7"). InnerVolumeSpecName "kube-api-access-m8s5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.293519 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8s5s\" (UniqueName: \"kubernetes.io/projected/31f3ac0d-dbb7-4371-8718-ddfafd5481f7-kube-api-access-m8s5s\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.466780 4849 generic.go:334] "Generic (PLEG): container finished" podID="12a90b3c-512f-4d27-89a8-b9c714a81952" containerID="d11ced4db23caed73033791f97c5ecc8dba45a86e9b563113a813e100a0bacdb" exitCode=0 Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.466810 4849 generic.go:334] "Generic (PLEG): container finished" podID="12a90b3c-512f-4d27-89a8-b9c714a81952" containerID="75c1c13f941c111ad4b60da225fba243f46549aa4bb8310446fb5e23122303fd" exitCode=143 Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.466852 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12a90b3c-512f-4d27-89a8-b9c714a81952","Type":"ContainerDied","Data":"d11ced4db23caed73033791f97c5ecc8dba45a86e9b563113a813e100a0bacdb"} Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.466879 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12a90b3c-512f-4d27-89a8-b9c714a81952","Type":"ContainerDied","Data":"75c1c13f941c111ad4b60da225fba243f46549aa4bb8310446fb5e23122303fd"} Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.468624 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"31f3ac0d-dbb7-4371-8718-ddfafd5481f7","Type":"ContainerDied","Data":"7afb2718c867c1fbea74164930b94b19ec4b5c3035be67f61cc41a32b2dc728e"} Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.468683 4849 scope.go:117] "RemoveContainer" containerID="194e5c617c80510f556f08b123a5da6fd8d5780b77d8e84e7287ea684ea14bbb" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.468847 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.523499 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.534519 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.551815 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 11:49:19 crc kubenswrapper[4849]: E1209 11:49:19.552505 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f3ac0d-dbb7-4371-8718-ddfafd5481f7" containerName="kube-state-metrics" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.552613 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f3ac0d-dbb7-4371-8718-ddfafd5481f7" containerName="kube-state-metrics" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.552941 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f3ac0d-dbb7-4371-8718-ddfafd5481f7" containerName="kube-state-metrics" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.553861 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.560485 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.560750 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.592458 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.705500 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9ed08f18-45d0-4623-848f-ebfacdb7b421-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.705589 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkkgg\" (UniqueName: \"kubernetes.io/projected/9ed08f18-45d0-4623-848f-ebfacdb7b421-kube-api-access-gkkgg\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.705650 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ed08f18-45d0-4623-848f-ebfacdb7b421-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.705671 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed08f18-45d0-4623-848f-ebfacdb7b421-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.808301 4849 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9ed08f18-45d0-4623-848f-ebfacdb7b421-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.808371 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkkgg\" (UniqueName: \"kubernetes.io/projected/9ed08f18-45d0-4623-848f-ebfacdb7b421-kube-api-access-gkkgg\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.808462 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ed08f18-45d0-4623-848f-ebfacdb7b421-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.808489 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed08f18-45d0-4623-848f-ebfacdb7b421-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.814600 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ed08f18-45d0-4623-848f-ebfacdb7b421-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.815801 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed08f18-45d0-4623-848f-ebfacdb7b421-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.834357 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkkgg\" (UniqueName: \"kubernetes.io/projected/9ed08f18-45d0-4623-848f-ebfacdb7b421-kube-api-access-gkkgg\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.838247 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9ed08f18-45d0-4623-848f-ebfacdb7b421-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9ed08f18-45d0-4623-848f-ebfacdb7b421\") " pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.877901 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 11:49:19 crc kubenswrapper[4849]: I1209 11:49:19.972716 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.113255 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-config-data\") pod \"12a90b3c-512f-4d27-89a8-b9c714a81952\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.113347 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12a90b3c-512f-4d27-89a8-b9c714a81952-logs\") pod \"12a90b3c-512f-4d27-89a8-b9c714a81952\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.113573 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx9xq\" (UniqueName: \"kubernetes.io/projected/12a90b3c-512f-4d27-89a8-b9c714a81952-kube-api-access-gx9xq\") pod \"12a90b3c-512f-4d27-89a8-b9c714a81952\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.113641 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-combined-ca-bundle\") pod \"12a90b3c-512f-4d27-89a8-b9c714a81952\" (UID: \"12a90b3c-512f-4d27-89a8-b9c714a81952\") " Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.114677 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a90b3c-512f-4d27-89a8-b9c714a81952-logs" (OuterVolumeSpecName: "logs") pod "12a90b3c-512f-4d27-89a8-b9c714a81952" (UID: "12a90b3c-512f-4d27-89a8-b9c714a81952"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.128719 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a90b3c-512f-4d27-89a8-b9c714a81952-kube-api-access-gx9xq" (OuterVolumeSpecName: "kube-api-access-gx9xq") pod "12a90b3c-512f-4d27-89a8-b9c714a81952" (UID: "12a90b3c-512f-4d27-89a8-b9c714a81952"). InnerVolumeSpecName "kube-api-access-gx9xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.205335 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-config-data" (OuterVolumeSpecName: "config-data") pod "12a90b3c-512f-4d27-89a8-b9c714a81952" (UID: "12a90b3c-512f-4d27-89a8-b9c714a81952"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.217185 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx9xq\" (UniqueName: \"kubernetes.io/projected/12a90b3c-512f-4d27-89a8-b9c714a81952-kube-api-access-gx9xq\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.217286 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.217939 4849 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12a90b3c-512f-4d27-89a8-b9c714a81952-logs\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.230405 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12a90b3c-512f-4d27-89a8-b9c714a81952" (UID: "12a90b3c-512f-4d27-89a8-b9c714a81952"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.319833 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a90b3c-512f-4d27-89a8-b9c714a81952-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.480356 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12a90b3c-512f-4d27-89a8-b9c714a81952","Type":"ContainerDied","Data":"00b8623390973c8810d0f6e919281f5db94560c75411d5eb7cd2bd6397378b84"} Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.480423 4849 scope.go:117] "RemoveContainer" containerID="d11ced4db23caed73033791f97c5ecc8dba45a86e9b563113a813e100a0bacdb" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.480536 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.522158 4849 scope.go:117] "RemoveContainer" containerID="75c1c13f941c111ad4b60da225fba243f46549aa4bb8310446fb5e23122303fd" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.561723 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f3ac0d-dbb7-4371-8718-ddfafd5481f7" path="/var/lib/kubelet/pods/31f3ac0d-dbb7-4371-8718-ddfafd5481f7/volumes" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.562543 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.620800 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.667820 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.681888 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.681933 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.719470 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:20 crc kubenswrapper[4849]: E1209 11:49:20.720158 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a90b3c-512f-4d27-89a8-b9c714a81952" containerName="nova-metadata-log" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.720231 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a90b3c-512f-4d27-89a8-b9c714a81952" containerName="nova-metadata-log" Dec 09 11:49:20 crc kubenswrapper[4849]: E1209 11:49:20.720318 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a90b3c-512f-4d27-89a8-b9c714a81952" containerName="nova-metadata-metadata" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.720380 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a90b3c-512f-4d27-89a8-b9c714a81952" containerName="nova-metadata-metadata" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.720641 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a90b3c-512f-4d27-89a8-b9c714a81952" containerName="nova-metadata-metadata" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.720720 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a90b3c-512f-4d27-89a8-b9c714a81952" containerName="nova-metadata-log" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.721863 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.729895 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.730086 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.742773 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.833096 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlg2n\" (UniqueName: \"kubernetes.io/projected/4eef29a4-126c-42cb-93dd-0ea36c59f82d-kube-api-access-wlg2n\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.833471 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.833524 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.833551 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-config-data\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.833641 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eef29a4-126c-42cb-93dd-0ea36c59f82d-logs\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.907177 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.941854 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.942206 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.942387 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-config-data\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.943902 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eef29a4-126c-42cb-93dd-0ea36c59f82d-logs\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.944398 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlg2n\" (UniqueName: \"kubernetes.io/projected/4eef29a4-126c-42cb-93dd-0ea36c59f82d-kube-api-access-wlg2n\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.945495 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eef29a4-126c-42cb-93dd-0ea36c59f82d-logs\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.946979 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.947933 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.948776 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-config-data\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.987544 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.988535 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.997107 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.997637 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="ceilometer-central-agent" containerID="cri-o://fffad01abbeaf2d78fdb855b92961167a0fd5a7fe766e5b6f28bb0960de84984" gracePeriod=30 Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.997854 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="proxy-httpd" containerID="cri-o://647a1aa88e4165d52ec84fd16b09332e66637e1c248fbcc1709e1bbe2b857513" gracePeriod=30 Dec 09 11:49:20 crc kubenswrapper[4849]: 
I1209 11:49:20.997964 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="sg-core" containerID="cri-o://f2696f4e8fad7d3ade1d468af3bb0a37861808ffab3b09ec3ce196d54fdac99b" gracePeriod=30 Dec 09 11:49:20 crc kubenswrapper[4849]: I1209 11:49:20.998066 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="ceilometer-notification-agent" containerID="cri-o://4f4c4374de258dc43249b24f8318ae3f1a45a2c5120b596a356baad874fcaa8e" gracePeriod=30 Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.002518 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlg2n\" (UniqueName: \"kubernetes.io/projected/4eef29a4-126c-42cb-93dd-0ea36c59f82d-kube-api-access-wlg2n\") pod \"nova-metadata-0\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") " pod="openstack/nova-metadata-0" Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.058290 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.126074 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.429548 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.563619 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsdmf"] Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.563993 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ed08f18-45d0-4623-848f-ebfacdb7b421","Type":"ContainerStarted","Data":"88326e2d45374c945eafec047319fd79082479fecfe96c976d6c635956e39a70"} Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.564259 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" podUID="7a5d6732-8e11-475a-a7ea-b5d1588e5770" containerName="dnsmasq-dns" containerID="cri-o://acc0df9a8d7c73e96c591a3f7c327ebca1b724a8c1a017b82d8c2090a1da80f9" gracePeriod=10 Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.588849 4849 generic.go:334] "Generic (PLEG): container finished" podID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerID="647a1aa88e4165d52ec84fd16b09332e66637e1c248fbcc1709e1bbe2b857513" exitCode=0 Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.588881 4849 generic.go:334] "Generic (PLEG): container finished" podID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerID="f2696f4e8fad7d3ade1d468af3bb0a37861808ffab3b09ec3ce196d54fdac99b" exitCode=2 Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.589787 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerDied","Data":"647a1aa88e4165d52ec84fd16b09332e66637e1c248fbcc1709e1bbe2b857513"} Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.589814 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerDied","Data":"f2696f4e8fad7d3ade1d468af3bb0a37861808ffab3b09ec3ce196d54fdac99b"} Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.666762 4849 
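kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"

The prober entries just below are HTTP startup probes against nova-api-0 timing out: the kubelet's probe client gave up on http://10.217.0.165:8774/ before response headers arrived, so each attempt counts as a failure until the startup probe's failure threshold is exhausted. A sketch of the kind of startup-probe spec that produces such entries, using the k8s.io/api types; the path and port are taken from the probe URL in the log, while the timeout, period, and threshold are illustrative assumptions rather than values read from the actual nova-api-0 pod spec:

    // probe.go - an HTTP startup probe of the shape implied by the log.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        startup := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/",
                    Port: intstr.FromInt(8774), // per "http://10.217.0.165:8774/" above
                },
            },
            TimeoutSeconds:   3,  // an exceeded timeout yields "context deadline exceeded"
            PeriodSeconds:    10, // retried each period until the threshold is spent
            FailureThreshold: 30, // assumed; generous thresholds are common for slow APIs
        }
        fmt.Printf("%+v\n", startup)
    }
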
Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.764611 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.165:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.764902 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.165:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 11:49:21 crc kubenswrapper[4849]: I1209 11:49:21.973083 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.558056 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a90b3c-512f-4d27-89a8-b9c714a81952" path="/var/lib/kubelet/pods/12a90b3c-512f-4d27-89a8-b9c714a81952/volumes" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.600935 4849 generic.go:334] "Generic (PLEG): container finished" podID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerID="fffad01abbeaf2d78fdb855b92961167a0fd5a7fe766e5b6f28bb0960de84984" exitCode=0 Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.601025 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerDied","Data":"fffad01abbeaf2d78fdb855b92961167a0fd5a7fe766e5b6f28bb0960de84984"} Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.609154 4849 generic.go:334] "Generic (PLEG): container finished" podID="7a5d6732-8e11-475a-a7ea-b5d1588e5770" containerID="acc0df9a8d7c73e96c591a3f7c327ebca1b724a8c1a017b82d8c2090a1da80f9" exitCode=0 Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.609230 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" event={"ID":"7a5d6732-8e11-475a-a7ea-b5d1588e5770","Type":"ContainerDied","Data":"acc0df9a8d7c73e96c591a3f7c327ebca1b724a8c1a017b82d8c2090a1da80f9"} Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.609272 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" event={"ID":"7a5d6732-8e11-475a-a7ea-b5d1588e5770","Type":"ContainerDied","Data":"0c69db71d5dadcad4a9cbccc7b9b882a66e5337283983aa2cd15203d3aa70383"} Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.609285 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c69db71d5dadcad4a9cbccc7b9b882a66e5337283983aa2cd15203d3aa70383" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.613474 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eef29a4-126c-42cb-93dd-0ea36c59f82d","Type":"ContainerStarted","Data":"209005778ea82dd530f035888c59dab0f25bffa5ed7e73f436880bcf4772002b"} Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.613518 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eef29a4-126c-42cb-93dd-0ea36c59f82d","Type":"ContainerStarted","Data":"9b47b62a9f9183da8bb7553a3612a0ca6a773a58e6eba069762d725ed2c60915"} Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.624094 4849
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ed08f18-45d0-4623-848f-ebfacdb7b421","Type":"ContainerStarted","Data":"f15ea2511cb6c2bf6d17f0a3b7df774fe540fa3182e855ef4818f03aacfc8995"} Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.624188 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.662437 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.016687171 podStartE2EDuration="3.662394695s" podCreationTimestamp="2025-12-09 11:49:19 +0000 UTC" firstStartedPulling="2025-12-09 11:49:20.609004192 +0000 UTC m=+1343.148888508" lastFinishedPulling="2025-12-09 11:49:21.254711716 +0000 UTC m=+1343.794596032" observedRunningTime="2025-12-09 11:49:22.641961656 +0000 UTC m=+1345.181845972" watchObservedRunningTime="2025-12-09 11:49:22.662394695 +0000 UTC m=+1345.202279011" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.717789 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.823056 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-config\") pod \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.823137 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x5zz\" (UniqueName: \"kubernetes.io/projected/7a5d6732-8e11-475a-a7ea-b5d1588e5770-kube-api-access-5x5zz\") pod \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.823244 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-dns-svc\") pod \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.823278 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-nb\") pod \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.823323 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-sb\") pod \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\" (UID: \"7a5d6732-8e11-475a-a7ea-b5d1588e5770\") " Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.835735 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5d6732-8e11-475a-a7ea-b5d1588e5770-kube-api-access-5x5zz" (OuterVolumeSpecName: "kube-api-access-5x5zz") pod "7a5d6732-8e11-475a-a7ea-b5d1588e5770" (UID: "7a5d6732-8e11-475a-a7ea-b5d1588e5770"). InnerVolumeSpecName "kube-api-access-5x5zz". 
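PluginName "kubernetes.io/projected", VolumeGidValue ""

The UnmountVolume.TearDown / "Volume detached" sequence around this point is the volume manager unwinding dnsmasq-dns-6d97fcdd8f-fsdmf after its "SyncLoop DELETE": each mounted volume is torn down, then reported detached on node "crc". The API-side deletion that set this off (logged earlier as "Killing container with a grace period" with gracePeriod=10) can be issued with client-go roughly as follows; the pod name and grace period are taken from the log, and in-cluster config is assumed:

    // deletepod.go - delete a pod with an explicit grace period.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        gp := int64(10) // matches gracePeriod=10 in the log
        err = client.CoreV1().Pods("openstack").Delete(context.TODO(),
            "dnsmasq-dns-6d97fcdd8f-fsdmf", metav1.DeleteOptions{GracePeriodSeconds: &gp})
        if err != nil {
            panic(err)
        }
    }
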
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.925334 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7a5d6732-8e11-475a-a7ea-b5d1588e5770" (UID: "7a5d6732-8e11-475a-a7ea-b5d1588e5770"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.927783 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x5zz\" (UniqueName: \"kubernetes.io/projected/7a5d6732-8e11-475a-a7ea-b5d1588e5770-kube-api-access-5x5zz\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.927827 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.948027 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-config" (OuterVolumeSpecName: "config") pod "7a5d6732-8e11-475a-a7ea-b5d1588e5770" (UID: "7a5d6732-8e11-475a-a7ea-b5d1588e5770"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.952494 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7a5d6732-8e11-475a-a7ea-b5d1588e5770" (UID: "7a5d6732-8e11-475a-a7ea-b5d1588e5770"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:49:22 crc kubenswrapper[4849]: I1209 11:49:22.959637 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7a5d6732-8e11-475a-a7ea-b5d1588e5770" (UID: "7a5d6732-8e11-475a-a7ea-b5d1588e5770"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.031179 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.031214 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.031224 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5d6732-8e11-475a-a7ea-b5d1588e5770-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.685964 4849 generic.go:334] "Generic (PLEG): container finished" podID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerID="4f4c4374de258dc43249b24f8318ae3f1a45a2c5120b596a356baad874fcaa8e" exitCode=0 Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.686431 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerDied","Data":"4f4c4374de258dc43249b24f8318ae3f1a45a2c5120b596a356baad874fcaa8e"} Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.700865 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsdmf" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.703490 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eef29a4-126c-42cb-93dd-0ea36c59f82d","Type":"ContainerStarted","Data":"d0116f167194b3f09c3ac78de100b62099c488807dc733da5f8d73b1ca76dc2a"} Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.761098 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.826123 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.826105577 podStartE2EDuration="3.826105577s" podCreationTimestamp="2025-12-09 11:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:23.771119945 +0000 UTC m=+1346.311004261" watchObservedRunningTime="2025-12-09 11:49:23.826105577 +0000 UTC m=+1346.365989893" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.852047 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-log-httpd\") pod \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.852131 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld6wx\" (UniqueName: \"kubernetes.io/projected/695f3a1c-1152-43f6-b3ac-d1f79588d45d-kube-api-access-ld6wx\") pod \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.852175 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-combined-ca-bundle\") pod \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.852209 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-sg-core-conf-yaml\") pod \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.852321 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-run-httpd\") pod \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.852340 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-scripts\") pod \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.852375 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-config-data\") pod \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\" (UID: \"695f3a1c-1152-43f6-b3ac-d1f79588d45d\") " Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.853986 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "695f3a1c-1152-43f6-b3ac-d1f79588d45d" (UID: "695f3a1c-1152-43f6-b3ac-d1f79588d45d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.854596 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "695f3a1c-1152-43f6-b3ac-d1f79588d45d" (UID: "695f3a1c-1152-43f6-b3ac-d1f79588d45d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.879479 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsdmf"] Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.908808 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695f3a1c-1152-43f6-b3ac-d1f79588d45d-kube-api-access-ld6wx" (OuterVolumeSpecName: "kube-api-access-ld6wx") pod "695f3a1c-1152-43f6-b3ac-d1f79588d45d" (UID: "695f3a1c-1152-43f6-b3ac-d1f79588d45d"). InnerVolumeSpecName "kube-api-access-ld6wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.911328 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsdmf"] Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.968737 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-scripts" (OuterVolumeSpecName: "scripts") pod "695f3a1c-1152-43f6-b3ac-d1f79588d45d" (UID: "695f3a1c-1152-43f6-b3ac-d1f79588d45d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.970052 4849 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.970076 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld6wx\" (UniqueName: \"kubernetes.io/projected/695f3a1c-1152-43f6-b3ac-d1f79588d45d-kube-api-access-ld6wx\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.970087 4849 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f3a1c-1152-43f6-b3ac-d1f79588d45d-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.970095 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:23 crc kubenswrapper[4849]: I1209 11:49:23.992033 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "695f3a1c-1152-43f6-b3ac-d1f79588d45d" (UID: "695f3a1c-1152-43f6-b3ac-d1f79588d45d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.078773 4849 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.126929 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-config-data" (OuterVolumeSpecName: "config-data") pod "695f3a1c-1152-43f6-b3ac-d1f79588d45d" (UID: "695f3a1c-1152-43f6-b3ac-d1f79588d45d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.158669 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "695f3a1c-1152-43f6-b3ac-d1f79588d45d" (UID: "695f3a1c-1152-43f6-b3ac-d1f79588d45d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.180422 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.180459 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f3a1c-1152-43f6-b3ac-d1f79588d45d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.549187 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5d6732-8e11-475a-a7ea-b5d1588e5770" path="/var/lib/kubelet/pods/7a5d6732-8e11-475a-a7ea-b5d1588e5770/volumes" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.711215 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.711683 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f3a1c-1152-43f6-b3ac-d1f79588d45d","Type":"ContainerDied","Data":"1a2c1ae56dd039023c6a24232ac3623bec442d80f9b7c331841b7cebc9596509"} Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.711713 4849 scope.go:117] "RemoveContainer" containerID="647a1aa88e4165d52ec84fd16b09332e66637e1c248fbcc1709e1bbe2b857513" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.734508 4849 scope.go:117] "RemoveContainer" containerID="f2696f4e8fad7d3ade1d468af3bb0a37861808ffab3b09ec3ce196d54fdac99b" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.736625 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.744231 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.755156 4849 scope.go:117] "RemoveContainer" containerID="4f4c4374de258dc43249b24f8318ae3f1a45a2c5120b596a356baad874fcaa8e" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.767359 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:49:24 crc kubenswrapper[4849]: E1209 11:49:24.773812 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="sg-core" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.773836 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="sg-core" Dec 09 11:49:24 crc kubenswrapper[4849]: E1209 11:49:24.773848 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="proxy-httpd" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.773853 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="proxy-httpd" Dec 09 11:49:24 crc kubenswrapper[4849]: E1209 11:49:24.773874 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5d6732-8e11-475a-a7ea-b5d1588e5770" containerName="init" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.773880 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5d6732-8e11-475a-a7ea-b5d1588e5770" containerName="init" Dec 09 11:49:24 crc kubenswrapper[4849]: E1209 11:49:24.773894 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5d6732-8e11-475a-a7ea-b5d1588e5770" containerName="dnsmasq-dns" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.773899 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5d6732-8e11-475a-a7ea-b5d1588e5770" containerName="dnsmasq-dns" Dec 09 11:49:24 crc kubenswrapper[4849]: E1209 11:49:24.773912 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="ceilometer-central-agent" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.773918 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="ceilometer-central-agent" Dec 09 11:49:24 crc kubenswrapper[4849]: E1209 11:49:24.773930 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="ceilometer-notification-agent" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 
11:49:24.773936 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="ceilometer-notification-agent" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.774100 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="ceilometer-central-agent" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.774113 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="proxy-httpd" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.774127 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="ceilometer-notification-agent" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.774138 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5d6732-8e11-475a-a7ea-b5d1588e5770" containerName="dnsmasq-dns" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.774148 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" containerName="sg-core" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.775691 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.778831 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.779026 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.780627 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.799078 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.804852 4849 scope.go:117] "RemoveContainer" containerID="fffad01abbeaf2d78fdb855b92961167a0fd5a7fe766e5b6f28bb0960de84984" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.892612 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.892752 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.892806 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.892845 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j58d\" (UniqueName: 
\"kubernetes.io/projected/a7426fff-9173-428f-949a-270118263742-kube-api-access-2j58d\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.892880 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-run-httpd\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.892911 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-scripts\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.892940 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-config-data\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.892978 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-log-httpd\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.994194 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.994318 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.994364 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.994382 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j58d\" (UniqueName: \"kubernetes.io/projected/a7426fff-9173-428f-949a-270118263742-kube-api-access-2j58d\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.994438 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-run-httpd\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.994465 4849 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-scripts\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.994496 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-config-data\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.994536 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-log-httpd\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.995087 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-log-httpd\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.996157 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-run-httpd\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.999065 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.999161 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:24 crc kubenswrapper[4849]: I1209 11:49:24.999301 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:25 crc kubenswrapper[4849]: I1209 11:49:24.999892 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-scripts\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:25 crc kubenswrapper[4849]: I1209 11:49:25.000116 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-config-data\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:25 crc kubenswrapper[4849]: I1209 11:49:25.020290 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j58d\" (UniqueName: 
\"kubernetes.io/projected/a7426fff-9173-428f-949a-270118263742-kube-api-access-2j58d\") pod \"ceilometer-0\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " pod="openstack/ceilometer-0" Dec 09 11:49:25 crc kubenswrapper[4849]: I1209 11:49:25.094023 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:49:25 crc kubenswrapper[4849]: I1209 11:49:25.648340 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:49:25 crc kubenswrapper[4849]: I1209 11:49:25.722951 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerStarted","Data":"ea525d04047ca7c023e1cfa3e393b01da20198fab55682bb0910ff4a7c04c13d"} Dec 09 11:49:26 crc kubenswrapper[4849]: I1209 11:49:26.059871 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 11:49:26 crc kubenswrapper[4849]: I1209 11:49:26.060274 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 11:49:26 crc kubenswrapper[4849]: I1209 11:49:26.549679 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695f3a1c-1152-43f6-b3ac-d1f79588d45d" path="/var/lib/kubelet/pods/695f3a1c-1152-43f6-b3ac-d1f79588d45d/volumes" Dec 09 11:49:26 crc kubenswrapper[4849]: I1209 11:49:26.736603 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerStarted","Data":"7e0ba3f404c85cde2a42eaa79a18dbfb54344b4f50cad6c0ab779d098be84472"} Dec 09 11:49:27 crc kubenswrapper[4849]: I1209 11:49:27.747673 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerStarted","Data":"7650cf333ecd17b0e85349652b79249bac3666f7b903e05ccf2e0b389dbf6e32"} Dec 09 11:49:27 crc kubenswrapper[4849]: I1209 11:49:27.749562 4849 generic.go:334] "Generic (PLEG): container finished" podID="db78fd7e-e02f-4ffa-9c38-675b7b021cc7" containerID="f3eda02d24fc428570df77d00f42f74cfa8b8429a47212e4d92a16e1f17d15c7" exitCode=0 Dec 09 11:49:27 crc kubenswrapper[4849]: I1209 11:49:27.749650 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nrffl" event={"ID":"db78fd7e-e02f-4ffa-9c38-675b7b021cc7","Type":"ContainerDied","Data":"f3eda02d24fc428570df77d00f42f74cfa8b8429a47212e4d92a16e1f17d15c7"} Dec 09 11:49:27 crc kubenswrapper[4849]: I1209 11:49:27.751507 4849 generic.go:334] "Generic (PLEG): container finished" podID="2a968b26-11b2-421b-89bc-d481ce7ebe0a" containerID="5f61ede7d81f986af270ddec4f312688d608480c925b7b5923a61e2788a7c3c5" exitCode=0 Dec 09 11:49:27 crc kubenswrapper[4849]: I1209 11:49:27.751553 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7qvhg" event={"ID":"2a968b26-11b2-421b-89bc-d481ce7ebe0a","Type":"ContainerDied","Data":"5f61ede7d81f986af270ddec4f312688d608480c925b7b5923a61e2788a7c3c5"} Dec 09 11:49:28 crc kubenswrapper[4849]: I1209 11:49:28.763038 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerStarted","Data":"4b4e06dc563ab993cea69a4df078840ecfb7e9478ca1fda5ea2403f921f388fb"} Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.299528 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.327073 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.403976 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtqlr\" (UniqueName: \"kubernetes.io/projected/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-kube-api-access-dtqlr\") pod \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.404038 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-combined-ca-bundle\") pod \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.404105 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-config-data\") pod \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.404251 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-scripts\") pod \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\" (UID: \"db78fd7e-e02f-4ffa-9c38-675b7b021cc7\") " Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.412654 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-scripts" (OuterVolumeSpecName: "scripts") pod "db78fd7e-e02f-4ffa-9c38-675b7b021cc7" (UID: "db78fd7e-e02f-4ffa-9c38-675b7b021cc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.414219 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-kube-api-access-dtqlr" (OuterVolumeSpecName: "kube-api-access-dtqlr") pod "db78fd7e-e02f-4ffa-9c38-675b7b021cc7" (UID: "db78fd7e-e02f-4ffa-9c38-675b7b021cc7"). InnerVolumeSpecName "kube-api-access-dtqlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.432453 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-config-data" (OuterVolumeSpecName: "config-data") pod "db78fd7e-e02f-4ffa-9c38-675b7b021cc7" (UID: "db78fd7e-e02f-4ffa-9c38-675b7b021cc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.434751 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db78fd7e-e02f-4ffa-9c38-675b7b021cc7" (UID: "db78fd7e-e02f-4ffa-9c38-675b7b021cc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.506047 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-scripts\") pod \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.506129 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-combined-ca-bundle\") pod \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.506213 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-config-data\") pod \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.506589 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pgmh\" (UniqueName: \"kubernetes.io/projected/2a968b26-11b2-421b-89bc-d481ce7ebe0a-kube-api-access-7pgmh\") pod \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\" (UID: \"2a968b26-11b2-421b-89bc-d481ce7ebe0a\") " Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.508327 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.508348 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtqlr\" (UniqueName: \"kubernetes.io/projected/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-kube-api-access-dtqlr\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.508373 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.508383 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db78fd7e-e02f-4ffa-9c38-675b7b021cc7-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.511781 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-scripts" (OuterVolumeSpecName: "scripts") pod "2a968b26-11b2-421b-89bc-d481ce7ebe0a" (UID: "2a968b26-11b2-421b-89bc-d481ce7ebe0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.516422 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a968b26-11b2-421b-89bc-d481ce7ebe0a-kube-api-access-7pgmh" (OuterVolumeSpecName: "kube-api-access-7pgmh") pod "2a968b26-11b2-421b-89bc-d481ce7ebe0a" (UID: "2a968b26-11b2-421b-89bc-d481ce7ebe0a"). InnerVolumeSpecName "kube-api-access-7pgmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.537676 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-config-data" (OuterVolumeSpecName: "config-data") pod "2a968b26-11b2-421b-89bc-d481ce7ebe0a" (UID: "2a968b26-11b2-421b-89bc-d481ce7ebe0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.539697 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a968b26-11b2-421b-89bc-d481ce7ebe0a" (UID: "2a968b26-11b2-421b-89bc-d481ce7ebe0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.609501 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.609538 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.609551 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a968b26-11b2-421b-89bc-d481ce7ebe0a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.609560 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pgmh\" (UniqueName: \"kubernetes.io/projected/2a968b26-11b2-421b-89bc-d481ce7ebe0a-kube-api-access-7pgmh\") on node \"crc\" DevicePath \"\"" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.775929 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nrffl" event={"ID":"db78fd7e-e02f-4ffa-9c38-675b7b021cc7","Type":"ContainerDied","Data":"ea2306d4a745803ce4dacbd47378493beb5ed2b3b901e7cad90a4589103a3798"} Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.775982 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2306d4a745803ce4dacbd47378493beb5ed2b3b901e7cad90a4589103a3798" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.777610 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nrffl" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.778707 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7qvhg" event={"ID":"2a968b26-11b2-421b-89bc-d481ce7ebe0a","Type":"ContainerDied","Data":"0fd34d09265fbb6fc70002d7a150eec18205ed4dc91623582e77209b2ceba324"} Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.778757 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd34d09265fbb6fc70002d7a150eec18205ed4dc91623582e77209b2ceba324" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.778845 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7qvhg" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.784346 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerStarted","Data":"13a79cdb1c11110c1815593c2069411b00d8bfc33908bae64baf1dbc6706d401"} Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.785907 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.817360 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.350957307 podStartE2EDuration="5.817331474s" podCreationTimestamp="2025-12-09 11:49:24 +0000 UTC" firstStartedPulling="2025-12-09 11:49:25.684864194 +0000 UTC m=+1348.224748510" lastFinishedPulling="2025-12-09 11:49:29.151238361 +0000 UTC m=+1351.691122677" observedRunningTime="2025-12-09 11:49:29.81116006 +0000 UTC m=+1352.351044386" watchObservedRunningTime="2025-12-09 11:49:29.817331474 +0000 UTC m=+1352.357215790" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.872561 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 11:49:29 crc kubenswrapper[4849]: E1209 11:49:29.873046 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db78fd7e-e02f-4ffa-9c38-675b7b021cc7" containerName="nova-cell1-conductor-db-sync" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.873069 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="db78fd7e-e02f-4ffa-9c38-675b7b021cc7" containerName="nova-cell1-conductor-db-sync" Dec 09 11:49:29 crc kubenswrapper[4849]: E1209 11:49:29.873085 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a968b26-11b2-421b-89bc-d481ce7ebe0a" containerName="nova-manage" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.873094 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a968b26-11b2-421b-89bc-d481ce7ebe0a" containerName="nova-manage" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.873310 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="db78fd7e-e02f-4ffa-9c38-675b7b021cc7" containerName="nova-cell1-conductor-db-sync" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.873336 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a968b26-11b2-421b-89bc-d481ce7ebe0a" containerName="nova-manage" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.874185 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.876425 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.895431 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 11:49:29 crc kubenswrapper[4849]: I1209 11:49:29.908687 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.032919 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chswq\" (UniqueName: \"kubernetes.io/projected/1669bf2d-c24f-46a6-9cdf-1f28689a44b2-kube-api-access-chswq\") pod \"nova-cell1-conductor-0\" (UID: \"1669bf2d-c24f-46a6-9cdf-1f28689a44b2\") " pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.033023 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1669bf2d-c24f-46a6-9cdf-1f28689a44b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1669bf2d-c24f-46a6-9cdf-1f28689a44b2\") " pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.033084 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1669bf2d-c24f-46a6-9cdf-1f28689a44b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1669bf2d-c24f-46a6-9cdf-1f28689a44b2\") " pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.115522 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.115861 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-log" containerID="cri-o://81fe28242a6913283a58a3558807e4b5a46d9c513891346c8d6c028bb24d95d8" gracePeriod=30 Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.115935 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-api" containerID="cri-o://52c856d333364772df959d137d4c905f5f35235af62fefd2ca9cc3e5ddd92ea0" gracePeriod=30 Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.133289 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.134163 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1669bf2d-c24f-46a6-9cdf-1f28689a44b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1669bf2d-c24f-46a6-9cdf-1f28689a44b2\") " pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.134239 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chswq\" (UniqueName: \"kubernetes.io/projected/1669bf2d-c24f-46a6-9cdf-1f28689a44b2-kube-api-access-chswq\") pod \"nova-cell1-conductor-0\" (UID: \"1669bf2d-c24f-46a6-9cdf-1f28689a44b2\") " pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc 
kubenswrapper[4849]: I1209 11:49:30.134244 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9a386901-7188-4596-8fa6-d007406d2bbf" containerName="nova-scheduler-scheduler" containerID="cri-o://7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd" gracePeriod=30 Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.134324 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1669bf2d-c24f-46a6-9cdf-1f28689a44b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1669bf2d-c24f-46a6-9cdf-1f28689a44b2\") " pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.149190 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1669bf2d-c24f-46a6-9cdf-1f28689a44b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1669bf2d-c24f-46a6-9cdf-1f28689a44b2\") " pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.149691 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1669bf2d-c24f-46a6-9cdf-1f28689a44b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1669bf2d-c24f-46a6-9cdf-1f28689a44b2\") " pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.159639 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chswq\" (UniqueName: \"kubernetes.io/projected/1669bf2d-c24f-46a6-9cdf-1f28689a44b2-kube-api-access-chswq\") pod \"nova-cell1-conductor-0\" (UID: \"1669bf2d-c24f-46a6-9cdf-1f28689a44b2\") " pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.162758 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.163002 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerName="nova-metadata-log" containerID="cri-o://209005778ea82dd530f035888c59dab0f25bffa5ed7e73f436880bcf4772002b" gracePeriod=30 Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.163448 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerName="nova-metadata-metadata" containerID="cri-o://d0116f167194b3f09c3ac78de100b62099c488807dc733da5f8d73b1ca76dc2a" gracePeriod=30 Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.196249 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.758619 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 11:49:30 crc kubenswrapper[4849]: W1209 11:49:30.760553 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1669bf2d_c24f_46a6_9cdf_1f28689a44b2.slice/crio-8f52839b9ca6d6d3f0d7b6121affd9fbb05cf1941c61e4d7c011dd09dc7889c6 WatchSource:0}: Error finding container 8f52839b9ca6d6d3f0d7b6121affd9fbb05cf1941c61e4d7c011dd09dc7889c6: Status 404 returned error can't find the container with id 8f52839b9ca6d6d3f0d7b6121affd9fbb05cf1941c61e4d7c011dd09dc7889c6 Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.836972 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1669bf2d-c24f-46a6-9cdf-1f28689a44b2","Type":"ContainerStarted","Data":"8f52839b9ca6d6d3f0d7b6121affd9fbb05cf1941c61e4d7c011dd09dc7889c6"} Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.846016 4849 generic.go:334] "Generic (PLEG): container finished" podID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerID="81fe28242a6913283a58a3558807e4b5a46d9c513891346c8d6c028bb24d95d8" exitCode=143 Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.846168 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49f8d36e-53c6-4ae6-a088-ee76c48897af","Type":"ContainerDied","Data":"81fe28242a6913283a58a3558807e4b5a46d9c513891346c8d6c028bb24d95d8"} Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.867792 4849 generic.go:334] "Generic (PLEG): container finished" podID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerID="d0116f167194b3f09c3ac78de100b62099c488807dc733da5f8d73b1ca76dc2a" exitCode=0 Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.867836 4849 generic.go:334] "Generic (PLEG): container finished" podID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerID="209005778ea82dd530f035888c59dab0f25bffa5ed7e73f436880bcf4772002b" exitCode=143 Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.868131 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eef29a4-126c-42cb-93dd-0ea36c59f82d","Type":"ContainerDied","Data":"d0116f167194b3f09c3ac78de100b62099c488807dc733da5f8d73b1ca76dc2a"} Dec 09 11:49:30 crc kubenswrapper[4849]: I1209 11:49:30.868236 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eef29a4-126c-42cb-93dd-0ea36c59f82d","Type":"ContainerDied","Data":"209005778ea82dd530f035888c59dab0f25bffa5ed7e73f436880bcf4772002b"} Dec 09 11:49:31 crc kubenswrapper[4849]: E1209 11:49:31.000433 4849 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 11:49:31 crc kubenswrapper[4849]: E1209 11:49:31.007668 4849 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 11:49:31 crc 
kubenswrapper[4849]: E1209 11:49:31.009020 4849 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 09 11:49:31 crc kubenswrapper[4849]: E1209 11:49:31.009069 4849 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9a386901-7188-4596-8fa6-d007406d2bbf" containerName="nova-scheduler-scheduler"
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.232916 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.273648 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlg2n\" (UniqueName: \"kubernetes.io/projected/4eef29a4-126c-42cb-93dd-0ea36c59f82d-kube-api-access-wlg2n\") pod \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") "
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.273932 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-config-data\") pod \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") "
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.274488 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eef29a4-126c-42cb-93dd-0ea36c59f82d-logs\") pod \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") "
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.274641 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-nova-metadata-tls-certs\") pod \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") "
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.274805 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eef29a4-126c-42cb-93dd-0ea36c59f82d-logs" (OuterVolumeSpecName: "logs") pod "4eef29a4-126c-42cb-93dd-0ea36c59f82d" (UID: "4eef29a4-126c-42cb-93dd-0ea36c59f82d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.274923 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-combined-ca-bundle\") pod \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\" (UID: \"4eef29a4-126c-42cb-93dd-0ea36c59f82d\") "
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.275315 4849 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eef29a4-126c-42cb-93dd-0ea36c59f82d-logs\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.279810 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eef29a4-126c-42cb-93dd-0ea36c59f82d-kube-api-access-wlg2n" (OuterVolumeSpecName: "kube-api-access-wlg2n") pod "4eef29a4-126c-42cb-93dd-0ea36c59f82d" (UID: "4eef29a4-126c-42cb-93dd-0ea36c59f82d"). InnerVolumeSpecName "kube-api-access-wlg2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.369468 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4eef29a4-126c-42cb-93dd-0ea36c59f82d" (UID: "4eef29a4-126c-42cb-93dd-0ea36c59f82d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.376806 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.376835 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlg2n\" (UniqueName: \"kubernetes.io/projected/4eef29a4-126c-42cb-93dd-0ea36c59f82d-kube-api-access-wlg2n\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.379530 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4eef29a4-126c-42cb-93dd-0ea36c59f82d" (UID: "4eef29a4-126c-42cb-93dd-0ea36c59f82d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.409584 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-config-data" (OuterVolumeSpecName: "config-data") pod "4eef29a4-126c-42cb-93dd-0ea36c59f82d" (UID: "4eef29a4-126c-42cb-93dd-0ea36c59f82d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.478511 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.478545 4849 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eef29a4-126c-42cb-93dd-0ea36c59f82d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.878912 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eef29a4-126c-42cb-93dd-0ea36c59f82d","Type":"ContainerDied","Data":"9b47b62a9f9183da8bb7553a3612a0ca6a773a58e6eba069762d725ed2c60915"}
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.878923 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.878977 4849 scope.go:117] "RemoveContainer" containerID="d0116f167194b3f09c3ac78de100b62099c488807dc733da5f8d73b1ca76dc2a"
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.882156 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1669bf2d-c24f-46a6-9cdf-1f28689a44b2","Type":"ContainerStarted","Data":"4d3b735709a7fd3b9ea2e814fe84c7855f55224d477833ecb44316c0eda401b0"}
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.882944 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.918714 4849 scope.go:117] "RemoveContainer" containerID="209005778ea82dd530f035888c59dab0f25bffa5ed7e73f436880bcf4772002b"
Dec 09 11:49:31 crc kubenswrapper[4849]: I1209 11:49:31.982969 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.982946369 podStartE2EDuration="2.982946369s" podCreationTimestamp="2025-12-09 11:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:31.919106176 +0000 UTC m=+1354.458990492" watchObservedRunningTime="2025-12-09 11:49:31.982946369 +0000 UTC m=+1354.522830695"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.017618 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.029384 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.044950 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:49:32 crc kubenswrapper[4849]: E1209 11:49:32.045679 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerName="nova-metadata-metadata"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.045808 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerName="nova-metadata-metadata"
Dec 09 11:49:32 crc kubenswrapper[4849]: E1209 11:49:32.045907 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerName="nova-metadata-log"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.045994 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerName="nova-metadata-log"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.046302 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerName="nova-metadata-metadata"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.046449 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" containerName="nova-metadata-log"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.048175 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.052630 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.054766 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.067318 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.095862 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-config-data\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.095900 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55vp4\" (UniqueName: \"kubernetes.io/projected/8d03d805-9a76-4af1-9618-9664e506474a-kube-api-access-55vp4\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.095986 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.096351 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d03d805-9a76-4af1-9618-9664e506474a-logs\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.096653 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.198274 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.198461 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-config-data\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.198484 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55vp4\" (UniqueName: \"kubernetes.io/projected/8d03d805-9a76-4af1-9618-9664e506474a-kube-api-access-55vp4\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.198513 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.198532 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d03d805-9a76-4af1-9618-9664e506474a-logs\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.198904 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d03d805-9a76-4af1-9618-9664e506474a-logs\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.203227 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-config-data\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.203344 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.203944 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.224144 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55vp4\" (UniqueName: \"kubernetes.io/projected/8d03d805-9a76-4af1-9618-9664e506474a-kube-api-access-55vp4\") pod \"nova-metadata-0\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") " pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.386380 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.576027 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eef29a4-126c-42cb-93dd-0ea36c59f82d" path="/var/lib/kubelet/pods/4eef29a4-126c-42cb-93dd-0ea36c59f82d/volumes"
Dec 09 11:49:32 crc kubenswrapper[4849]: W1209 11:49:32.944977 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d03d805_9a76_4af1_9618_9664e506474a.slice/crio-797f84fcaf74facce9e3e00f722b82d02a22e57b03996d2c8b7ba3db220920f5 WatchSource:0}: Error finding container 797f84fcaf74facce9e3e00f722b82d02a22e57b03996d2c8b7ba3db220920f5: Status 404 returned error can't find the container with id 797f84fcaf74facce9e3e00f722b82d02a22e57b03996d2c8b7ba3db220920f5
Dec 09 11:49:32 crc kubenswrapper[4849]: I1209 11:49:32.952551 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.601085 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.670908 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssq9m\" (UniqueName: \"kubernetes.io/projected/9a386901-7188-4596-8fa6-d007406d2bbf-kube-api-access-ssq9m\") pod \"9a386901-7188-4596-8fa6-d007406d2bbf\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") "
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.671030 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-combined-ca-bundle\") pod \"9a386901-7188-4596-8fa6-d007406d2bbf\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") "
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.671099 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-config-data\") pod \"9a386901-7188-4596-8fa6-d007406d2bbf\" (UID: \"9a386901-7188-4596-8fa6-d007406d2bbf\") "
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.679121 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a386901-7188-4596-8fa6-d007406d2bbf-kube-api-access-ssq9m" (OuterVolumeSpecName: "kube-api-access-ssq9m") pod "9a386901-7188-4596-8fa6-d007406d2bbf" (UID: "9a386901-7188-4596-8fa6-d007406d2bbf"). InnerVolumeSpecName "kube-api-access-ssq9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.716979 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-config-data" (OuterVolumeSpecName: "config-data") pod "9a386901-7188-4596-8fa6-d007406d2bbf" (UID: "9a386901-7188-4596-8fa6-d007406d2bbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.777443 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssq9m\" (UniqueName: \"kubernetes.io/projected/9a386901-7188-4596-8fa6-d007406d2bbf-kube-api-access-ssq9m\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.777524 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.793606 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a386901-7188-4596-8fa6-d007406d2bbf" (UID: "9a386901-7188-4596-8fa6-d007406d2bbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.879804 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a386901-7188-4596-8fa6-d007406d2bbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.910649 4849 generic.go:334] "Generic (PLEG): container finished" podID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerID="52c856d333364772df959d137d4c905f5f35235af62fefd2ca9cc3e5ddd92ea0" exitCode=0
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.910713 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49f8d36e-53c6-4ae6-a088-ee76c48897af","Type":"ContainerDied","Data":"52c856d333364772df959d137d4c905f5f35235af62fefd2ca9cc3e5ddd92ea0"}
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.919613 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d03d805-9a76-4af1-9618-9664e506474a","Type":"ContainerStarted","Data":"021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c"}
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.919670 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d03d805-9a76-4af1-9618-9664e506474a","Type":"ContainerStarted","Data":"5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887"}
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.919689 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d03d805-9a76-4af1-9618-9664e506474a","Type":"ContainerStarted","Data":"797f84fcaf74facce9e3e00f722b82d02a22e57b03996d2c8b7ba3db220920f5"}
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.926640 4849 generic.go:334] "Generic (PLEG): container finished" podID="9a386901-7188-4596-8fa6-d007406d2bbf" containerID="7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd" exitCode=0
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.927588 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.928176 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9a386901-7188-4596-8fa6-d007406d2bbf","Type":"ContainerDied","Data":"7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd"}
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.928238 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9a386901-7188-4596-8fa6-d007406d2bbf","Type":"ContainerDied","Data":"94534423595cbbd006ba02a43eef4dd226b6ab343c25d13fef9fb2f986931b33"}
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.928260 4849 scope.go:117] "RemoveContainer" containerID="7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd"
Dec 09 11:49:33 crc kubenswrapper[4849]: I1209 11:49:33.972728 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.972709055 podStartE2EDuration="2.972709055s" podCreationTimestamp="2025-12-09 11:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:33.966758847 +0000 UTC m=+1356.506643173" watchObservedRunningTime="2025-12-09 11:49:33.972709055 +0000 UTC m=+1356.512593371"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.012001 4849 scope.go:117] "RemoveContainer" containerID="7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.012099 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 11:49:34 crc kubenswrapper[4849]: E1209 11:49:34.021785 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd\": container with ID starting with 7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd not found: ID does not exist" containerID="7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.021821 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd"} err="failed to get container status \"7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd\": rpc error: code = NotFound desc = could not find container \"7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd\": container with ID starting with 7c7c772907fe7eeaa8f19ffe6bc3e0fd23336c5f2edb170fcb8a9c93b8108bdd not found: ID does not exist"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.031697 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.066622 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 11:49:34 crc kubenswrapper[4849]: E1209 11:49:34.067138 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a386901-7188-4596-8fa6-d007406d2bbf" containerName="nova-scheduler-scheduler"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.067156 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a386901-7188-4596-8fa6-d007406d2bbf" containerName="nova-scheduler-scheduler"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.067376 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a386901-7188-4596-8fa6-d007406d2bbf" containerName="nova-scheduler-scheduler"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.068087 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.070980 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.075282 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.190282 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.190401 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtmgn\" (UniqueName: \"kubernetes.io/projected/22c28cc6-34bf-4e88-9468-13fed8dbd43e-kube-api-access-dtmgn\") pod \"nova-scheduler-0\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.190832 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-config-data\") pod \"nova-scheduler-0\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.293077 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.293144 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtmgn\" (UniqueName: \"kubernetes.io/projected/22c28cc6-34bf-4e88-9468-13fed8dbd43e-kube-api-access-dtmgn\") pod \"nova-scheduler-0\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.293194 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-config-data\") pod \"nova-scheduler-0\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.301604 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-config-data\") pod \"nova-scheduler-0\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.307640 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.312078 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtmgn\" (UniqueName: \"kubernetes.io/projected/22c28cc6-34bf-4e88-9468-13fed8dbd43e-kube-api-access-dtmgn\") pod \"nova-scheduler-0\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.390908 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.402127 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.497276 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-combined-ca-bundle\") pod \"49f8d36e-53c6-4ae6-a088-ee76c48897af\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") "
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.497720 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qknkn\" (UniqueName: \"kubernetes.io/projected/49f8d36e-53c6-4ae6-a088-ee76c48897af-kube-api-access-qknkn\") pod \"49f8d36e-53c6-4ae6-a088-ee76c48897af\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") "
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.497778 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49f8d36e-53c6-4ae6-a088-ee76c48897af-logs\") pod \"49f8d36e-53c6-4ae6-a088-ee76c48897af\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") "
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.497826 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-config-data\") pod \"49f8d36e-53c6-4ae6-a088-ee76c48897af\" (UID: \"49f8d36e-53c6-4ae6-a088-ee76c48897af\") "
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.498386 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f8d36e-53c6-4ae6-a088-ee76c48897af-logs" (OuterVolumeSpecName: "logs") pod "49f8d36e-53c6-4ae6-a088-ee76c48897af" (UID: "49f8d36e-53c6-4ae6-a088-ee76c48897af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.504108 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f8d36e-53c6-4ae6-a088-ee76c48897af-kube-api-access-qknkn" (OuterVolumeSpecName: "kube-api-access-qknkn") pod "49f8d36e-53c6-4ae6-a088-ee76c48897af" (UID: "49f8d36e-53c6-4ae6-a088-ee76c48897af"). InnerVolumeSpecName "kube-api-access-qknkn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.529946 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-config-data" (OuterVolumeSpecName: "config-data") pod "49f8d36e-53c6-4ae6-a088-ee76c48897af" (UID: "49f8d36e-53c6-4ae6-a088-ee76c48897af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.536834 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49f8d36e-53c6-4ae6-a088-ee76c48897af" (UID: "49f8d36e-53c6-4ae6-a088-ee76c48897af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.556294 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a386901-7188-4596-8fa6-d007406d2bbf" path="/var/lib/kubelet/pods/9a386901-7188-4596-8fa6-d007406d2bbf/volumes"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.599578 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.599632 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f8d36e-53c6-4ae6-a088-ee76c48897af-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.599644 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qknkn\" (UniqueName: \"kubernetes.io/projected/49f8d36e-53c6-4ae6-a088-ee76c48897af-kube-api-access-qknkn\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.599654 4849 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49f8d36e-53c6-4ae6-a088-ee76c48897af-logs\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.950782 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.951193 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49f8d36e-53c6-4ae6-a088-ee76c48897af","Type":"ContainerDied","Data":"fd6982179f96fbd970c1bc7937e5f973e5cf17cec2ac8d2c8532d1d3895982a1"}
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.951225 4849 scope.go:117] "RemoveContainer" containerID="52c856d333364772df959d137d4c905f5f35235af62fefd2ca9cc3e5ddd92ea0"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.977446 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.983331 4849 scope.go:117] "RemoveContainer" containerID="81fe28242a6913283a58a3558807e4b5a46d9c513891346c8d6c028bb24d95d8"
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.985100 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 09 11:49:34 crc kubenswrapper[4849]: I1209 11:49:34.993272 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 09 11:49:34 crc kubenswrapper[4849]: W1209 11:49:34.993716 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22c28cc6_34bf_4e88_9468_13fed8dbd43e.slice/crio-9fc38e17f6893bb91884301a474c4fd12125e40c5e16061c01aa06788364f16f WatchSource:0}: Error finding container 9fc38e17f6893bb91884301a474c4fd12125e40c5e16061c01aa06788364f16f: Status 404 returned error can't find the container with id 9fc38e17f6893bb91884301a474c4fd12125e40c5e16061c01aa06788364f16f
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.050315 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 09 11:49:35 crc kubenswrapper[4849]: E1209 11:49:35.050813 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-api"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.050827 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-api"
Dec 09 11:49:35 crc kubenswrapper[4849]: E1209 11:49:35.050843 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-log"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.050849 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-log"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.051024 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-api"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.051042 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" containerName="nova-api-log"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.052034 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.063571 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.098627 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.232206 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-config-data\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.232298 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.232351 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6bk4\" (UniqueName: \"kubernetes.io/projected/f397cb53-5c12-40fe-99dc-3373e5e76539-kube-api-access-x6bk4\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.232388 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f397cb53-5c12-40fe-99dc-3373e5e76539-logs\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.334787 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.334879 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6bk4\" (UniqueName: \"kubernetes.io/projected/f397cb53-5c12-40fe-99dc-3373e5e76539-kube-api-access-x6bk4\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.334931 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f397cb53-5c12-40fe-99dc-3373e5e76539-logs\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.334985 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-config-data\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.335918 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f397cb53-5c12-40fe-99dc-3373e5e76539-logs\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.340111 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-config-data\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.345125 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.363865 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6bk4\" (UniqueName: \"kubernetes.io/projected/f397cb53-5c12-40fe-99dc-3373e5e76539-kube-api-access-x6bk4\") pod \"nova-api-0\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.394002 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.924338 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 09 11:49:35 crc kubenswrapper[4849]: I1209 11:49:35.985580 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f397cb53-5c12-40fe-99dc-3373e5e76539","Type":"ContainerStarted","Data":"1ecf1648a4cb2c3e5b901b3f2c609a2ff837c435a1e314aae7dc56d35106d7f2"}
Dec 09 11:49:36 crc kubenswrapper[4849]: I1209 11:49:36.009247 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22c28cc6-34bf-4e88-9468-13fed8dbd43e","Type":"ContainerStarted","Data":"8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca"}
Dec 09 11:49:36 crc kubenswrapper[4849]: I1209 11:49:36.009301 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22c28cc6-34bf-4e88-9468-13fed8dbd43e","Type":"ContainerStarted","Data":"9fc38e17f6893bb91884301a474c4fd12125e40c5e16061c01aa06788364f16f"}
Dec 09 11:49:36 crc kubenswrapper[4849]: I1209 11:49:36.553525 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f8d36e-53c6-4ae6-a088-ee76c48897af" path="/var/lib/kubelet/pods/49f8d36e-53c6-4ae6-a088-ee76c48897af/volumes"
Dec 09 11:49:37 crc kubenswrapper[4849]: I1209 11:49:37.021879 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f397cb53-5c12-40fe-99dc-3373e5e76539","Type":"ContainerStarted","Data":"f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db"}
Dec 09 11:49:37 crc kubenswrapper[4849]: I1209 11:49:37.021971 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f397cb53-5c12-40fe-99dc-3373e5e76539","Type":"ContainerStarted","Data":"04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99"}
Dec 09 11:49:37 crc kubenswrapper[4849]: I1209 11:49:37.042788 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.042767662 podStartE2EDuration="4.042767662s" podCreationTimestamp="2025-12-09 11:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:36.062790245 +0000 UTC m=+1358.602674561" watchObservedRunningTime="2025-12-09 11:49:37.042767662 +0000 UTC m=+1359.582651998"
Dec 09 11:49:37 crc kubenswrapper[4849]: I1209 11:49:37.048190 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.048177838 podStartE2EDuration="3.048177838s" podCreationTimestamp="2025-12-09 11:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:37.039670415 +0000 UTC m=+1359.579554731" watchObservedRunningTime="2025-12-09 11:49:37.048177838 +0000 UTC m=+1359.588062154"
Dec 09 11:49:37 crc kubenswrapper[4849]: I1209 11:49:37.386688 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 09 11:49:37 crc kubenswrapper[4849]: I1209 11:49:37.386766 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 09 11:49:39 crc kubenswrapper[4849]: I1209 11:49:39.391004 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 09 11:49:40 crc kubenswrapper[4849]: I1209 11:49:40.233178 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Dec 09 11:49:42 crc kubenswrapper[4849]: I1209 11:49:42.387287 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 09 11:49:42 crc kubenswrapper[4849]: I1209 11:49:42.387658 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 09 11:49:43 crc kubenswrapper[4849]: I1209 11:49:43.397665 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.175:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:49:43 crc kubenswrapper[4849]: I1209 11:49:43.398251 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.175:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:49:44 crc kubenswrapper[4849]: I1209 11:49:44.391660 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 09 11:49:44 crc kubenswrapper[4849]: I1209 11:49:44.420132 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 09 11:49:45 crc kubenswrapper[4849]: I1209 11:49:45.135179 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 09 11:49:45 crc kubenswrapper[4849]: I1209 11:49:45.395167 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 09 11:49:45 crc kubenswrapper[4849]: I1209 11:49:45.395528 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 09 11:49:46 crc kubenswrapper[4849]: I1209 11:49:46.435756 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:49:46 crc kubenswrapper[4849]: I1209 11:49:46.476651 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:49:48 crc kubenswrapper[4849]: I1209 11:49:48.836977 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:48 crc kubenswrapper[4849]: I1209 11:49:48.964802 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-config-data\") pod \"073f7523-bbfd-4875-a17a-f9034464cb01\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") "
Dec 09 11:49:48 crc kubenswrapper[4849]: I1209 11:49:48.965041 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22srv\" (UniqueName: \"kubernetes.io/projected/073f7523-bbfd-4875-a17a-f9034464cb01-kube-api-access-22srv\") pod \"073f7523-bbfd-4875-a17a-f9034464cb01\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") "
Dec 09 11:49:48 crc kubenswrapper[4849]: I1209 11:49:48.965119 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-combined-ca-bundle\") pod \"073f7523-bbfd-4875-a17a-f9034464cb01\" (UID: \"073f7523-bbfd-4875-a17a-f9034464cb01\") "
Dec 09 11:49:48 crc kubenswrapper[4849]: I1209 11:49:48.971688 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073f7523-bbfd-4875-a17a-f9034464cb01-kube-api-access-22srv" (OuterVolumeSpecName: "kube-api-access-22srv") pod "073f7523-bbfd-4875-a17a-f9034464cb01" (UID: "073f7523-bbfd-4875-a17a-f9034464cb01"). InnerVolumeSpecName "kube-api-access-22srv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:49:48 crc kubenswrapper[4849]: I1209 11:49:48.991592 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "073f7523-bbfd-4875-a17a-f9034464cb01" (UID: "073f7523-bbfd-4875-a17a-f9034464cb01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:49:48 crc kubenswrapper[4849]: I1209 11:49:48.993588 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-config-data" (OuterVolumeSpecName: "config-data") pod "073f7523-bbfd-4875-a17a-f9034464cb01" (UID: "073f7523-bbfd-4875-a17a-f9034464cb01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.067918 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22srv\" (UniqueName: \"kubernetes.io/projected/073f7523-bbfd-4875-a17a-f9034464cb01-kube-api-access-22srv\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.067950 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.067977 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/073f7523-bbfd-4875-a17a-f9034464cb01-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.142689 4849 generic.go:334] "Generic (PLEG): container finished" podID="073f7523-bbfd-4875-a17a-f9034464cb01" containerID="4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce" exitCode=137
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.142732 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"073f7523-bbfd-4875-a17a-f9034464cb01","Type":"ContainerDied","Data":"4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce"}
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.142760 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"073f7523-bbfd-4875-a17a-f9034464cb01","Type":"ContainerDied","Data":"1a64aae5f41aad3a285714c1cfcfb5d30c8be5dc6fc90bcf918e5bcc7ad6923b"}
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.142777 4849 scope.go:117] "RemoveContainer" containerID="4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.142782 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.162726 4849 scope.go:117] "RemoveContainer" containerID="4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce"
Dec 09 11:49:49 crc kubenswrapper[4849]: E1209 11:49:49.163131 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce\": container with ID starting with 4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce not found: ID does not exist" containerID="4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.163214 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce"} err="failed to get container status \"4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce\": rpc error: code = NotFound desc = could not find container \"4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce\": container with ID starting with 4ab450a931d9e553727d5d193a5a45b3bfbb6e5fb78bdfcd71058bb9e38958ce not found: ID does not exist"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.184431 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.191547 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.216260 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 09 11:49:49 crc kubenswrapper[4849]: E1209 11:49:49.216882 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073f7523-bbfd-4875-a17a-f9034464cb01" containerName="nova-cell1-novncproxy-novncproxy"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.216932 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="073f7523-bbfd-4875-a17a-f9034464cb01" containerName="nova-cell1-novncproxy-novncproxy"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.217593 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="073f7523-bbfd-4875-a17a-f9034464cb01" containerName="nova-cell1-novncproxy-novncproxy"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.218483 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.220641 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.221847 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.224689 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.241403 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.377253 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.377324 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.377567 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.377701 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.377779 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdmbb\" (UniqueName: \"kubernetes.io/projected/544ee850-6363-4bb9-89d8-c9160c2d850a-kube-api-access-gdmbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.479051 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.479301 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.479338 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdmbb\" (UniqueName: \"kubernetes.io/projected/544ee850-6363-4bb9-89d8-c9160c2d850a-kube-api-access-gdmbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.479454 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.479473 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.483443 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.483523 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.484617 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.488731 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544ee850-6363-4bb9-89d8-c9160c2d850a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.499709 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdmbb\" (UniqueName: \"kubernetes.io/projected/544ee850-6363-4bb9-89d8-c9160c2d850a-kube-api-access-gdmbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"544ee850-6363-4bb9-89d8-c9160c2d850a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.541301 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:49 crc kubenswrapper[4849]: I1209 11:49:49.970846 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 09 11:49:50 crc kubenswrapper[4849]: I1209 11:49:50.151637 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"544ee850-6363-4bb9-89d8-c9160c2d850a","Type":"ContainerStarted","Data":"8fc8759385e32b5a62928a318ba7aa1b220e5792429628ba92512fec4a817a9b"}
Dec 09 11:49:50 crc kubenswrapper[4849]: I1209 11:49:50.549538 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="073f7523-bbfd-4875-a17a-f9034464cb01" path="/var/lib/kubelet/pods/073f7523-bbfd-4875-a17a-f9034464cb01/volumes"
Dec 09 11:49:51 crc kubenswrapper[4849]: I1209 11:49:51.164491 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"544ee850-6363-4bb9-89d8-c9160c2d850a","Type":"ContainerStarted","Data":"ff08cc5e533ccdfeab5edc75e45c9b62b9dd631764d3e787ac41460487df572a"}
Dec 09 11:49:51 crc kubenswrapper[4849]: I1209 11:49:51.197833 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.197820285 podStartE2EDuration="2.197820285s" podCreationTimestamp="2025-12-09 11:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:51.185535458 +0000 UTC m=+1373.725419784" watchObservedRunningTime="2025-12-09 11:49:51.197820285 +0000 UTC m=+1373.737704601"
Dec 09 11:49:52 crc kubenswrapper[4849]: I1209 11:49:52.391951 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 09 11:49:52 crc kubenswrapper[4849]: I1209 11:49:52.392677 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 09 11:49:52 crc kubenswrapper[4849]: I1209 11:49:52.402789 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 09 11:49:52 crc kubenswrapper[4849]: I1209 11:49:52.403175 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 09 11:49:54 crc kubenswrapper[4849]: I1209 11:49:54.546671 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 11:49:55 crc kubenswrapper[4849]: I1209 11:49:55.103934 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 09 11:49:55 crc kubenswrapper[4849]: I1209 11:49:55.398548 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 09 11:49:55 crc kubenswrapper[4849]: I1209 11:49:55.399802 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 09 11:49:55 crc kubenswrapper[4849]: I1209 11:49:55.401317 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 09 11:49:55 crc kubenswrapper[4849]: I1209 11:49:55.413404 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.203662 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.207757 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.431354 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zdq6r"]
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.442500 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.458449 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zdq6r"]
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.512436 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-config\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.512854 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2rn\" (UniqueName: \"kubernetes.io/projected/42813931-a611-48f1-930f-97bc3e9cf6ac-kube-api-access-7d2rn\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.513000 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.513116 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.513289 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-dns-svc\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.614681 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.614765 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r"
Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.614837 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-dns-svc\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.614897 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-config\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.614980 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d2rn\" (UniqueName: \"kubernetes.io/projected/42813931-a611-48f1-930f-97bc3e9cf6ac-kube-api-access-7d2rn\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.615858 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.615903 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.616298 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-config\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.616716 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-dns-svc\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.650636 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d2rn\" (UniqueName: \"kubernetes.io/projected/42813931-a611-48f1-930f-97bc3e9cf6ac-kube-api-access-7d2rn\") pod \"dnsmasq-dns-5b856c5697-zdq6r\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:56 crc kubenswrapper[4849]: I1209 11:49:56.801623 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:57 crc kubenswrapper[4849]: I1209 11:49:57.376672 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zdq6r"] Dec 09 11:49:58 crc kubenswrapper[4849]: I1209 11:49:58.228205 4849 generic.go:334] "Generic (PLEG): container finished" podID="42813931-a611-48f1-930f-97bc3e9cf6ac" containerID="97a172885be55de61ac16a029bcad5b8a767c25424c07b6163717ce143ad333f" exitCode=0 Dec 09 11:49:58 crc kubenswrapper[4849]: I1209 11:49:58.229464 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" event={"ID":"42813931-a611-48f1-930f-97bc3e9cf6ac","Type":"ContainerDied","Data":"97a172885be55de61ac16a029bcad5b8a767c25424c07b6163717ce143ad333f"} Dec 09 11:49:58 crc kubenswrapper[4849]: I1209 11:49:58.229494 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" event={"ID":"42813931-a611-48f1-930f-97bc3e9cf6ac","Type":"ContainerStarted","Data":"01cf1f59861b90d124059265a35eb3363d1347ba2074afdcced6d2305ee8bfcb"} Dec 09 11:49:58 crc kubenswrapper[4849]: I1209 11:49:58.679431 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.221068 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.221399 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7426fff-9173-428f-949a-270118263742" containerName="ceilometer-central-agent" containerID="cri-o://7e0ba3f404c85cde2a42eaa79a18dbfb54344b4f50cad6c0ab779d098be84472" gracePeriod=30 Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.221443 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7426fff-9173-428f-949a-270118263742" containerName="proxy-httpd" containerID="cri-o://13a79cdb1c11110c1815593c2069411b00d8bfc33908bae64baf1dbc6706d401" gracePeriod=30 Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.221506 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7426fff-9173-428f-949a-270118263742" containerName="sg-core" containerID="cri-o://4b4e06dc563ab993cea69a4df078840ecfb7e9478ca1fda5ea2403f921f388fb" gracePeriod=30 Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.221777 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7426fff-9173-428f-949a-270118263742" containerName="ceilometer-notification-agent" containerID="cri-o://7650cf333ecd17b0e85349652b79249bac3666f7b903e05ccf2e0b389dbf6e32" gracePeriod=30 Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.240799 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-log" containerID="cri-o://04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99" gracePeriod=30 Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.242209 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" event={"ID":"42813931-a611-48f1-930f-97bc3e9cf6ac","Type":"ContainerStarted","Data":"ec803d34d2ed0b646b83507ff1d002aa986892bd0808b40e1701de7d68a6eb7f"} Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.242257 4849 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.242707 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-api" containerID="cri-o://f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db" gracePeriod=30 Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.542266 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.560168 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:49:59 crc kubenswrapper[4849]: I1209 11:49:59.578369 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" podStartSLOduration=3.578351199 podStartE2EDuration="3.578351199s" podCreationTimestamp="2025-12-09 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:49:59.281774218 +0000 UTC m=+1381.821658534" watchObservedRunningTime="2025-12-09 11:49:59.578351199 +0000 UTC m=+1382.118235515" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.250473 4849 generic.go:334] "Generic (PLEG): container finished" podID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerID="04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99" exitCode=143 Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.250546 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f397cb53-5c12-40fe-99dc-3373e5e76539","Type":"ContainerDied","Data":"04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99"} Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.254015 4849 generic.go:334] "Generic (PLEG): container finished" podID="a7426fff-9173-428f-949a-270118263742" containerID="13a79cdb1c11110c1815593c2069411b00d8bfc33908bae64baf1dbc6706d401" exitCode=0 Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.254034 4849 generic.go:334] "Generic (PLEG): container finished" podID="a7426fff-9173-428f-949a-270118263742" containerID="4b4e06dc563ab993cea69a4df078840ecfb7e9478ca1fda5ea2403f921f388fb" exitCode=2 Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.254043 4849 generic.go:334] "Generic (PLEG): container finished" podID="a7426fff-9173-428f-949a-270118263742" containerID="7e0ba3f404c85cde2a42eaa79a18dbfb54344b4f50cad6c0ab779d098be84472" exitCode=0 Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.254093 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerDied","Data":"13a79cdb1c11110c1815593c2069411b00d8bfc33908bae64baf1dbc6706d401"} Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.254122 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerDied","Data":"4b4e06dc563ab993cea69a4df078840ecfb7e9478ca1fda5ea2403f921f388fb"} Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.254137 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerDied","Data":"7e0ba3f404c85cde2a42eaa79a18dbfb54344b4f50cad6c0ab779d098be84472"} Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.270445 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.489723 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xzqt9"] Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.491246 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.497891 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.501981 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpcsp\" (UniqueName: \"kubernetes.io/projected/b8a2e163-5f9e-463e-baba-5dff706bbdd4-kube-api-access-qpcsp\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.502031 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.502132 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-scripts\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.502211 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-config-data\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.513304 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xzqt9"] Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.518665 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.606271 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpcsp\" (UniqueName: \"kubernetes.io/projected/b8a2e163-5f9e-463e-baba-5dff706bbdd4-kube-api-access-qpcsp\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.606590 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: 
\"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.606722 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-scripts\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.606830 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-config-data\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.613649 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-scripts\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.614093 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-config-data\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.614206 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.629728 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpcsp\" (UniqueName: \"kubernetes.io/projected/b8a2e163-5f9e-463e-baba-5dff706bbdd4-kube-api-access-qpcsp\") pod \"nova-cell1-cell-mapping-xzqt9\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:00 crc kubenswrapper[4849]: I1209 11:50:00.848723 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.156239 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xzqt9"] Dec 09 11:50:01 crc kubenswrapper[4849]: W1209 11:50:01.159741 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a2e163_5f9e_463e_baba_5dff706bbdd4.slice/crio-f83d75f750cd5334970e853d8417ccb486c35de22c268dc52d8237442110b9b5 WatchSource:0}: Error finding container f83d75f750cd5334970e853d8417ccb486c35de22c268dc52d8237442110b9b5: Status 404 returned error can't find the container with id f83d75f750cd5334970e853d8417ccb486c35de22c268dc52d8237442110b9b5 Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.285926 4849 generic.go:334] "Generic (PLEG): container finished" podID="a7426fff-9173-428f-949a-270118263742" containerID="7650cf333ecd17b0e85349652b79249bac3666f7b903e05ccf2e0b389dbf6e32" exitCode=0 Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.285969 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerDied","Data":"7650cf333ecd17b0e85349652b79249bac3666f7b903e05ccf2e0b389dbf6e32"} Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.289527 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xzqt9" event={"ID":"b8a2e163-5f9e-463e-baba-5dff706bbdd4","Type":"ContainerStarted","Data":"f83d75f750cd5334970e853d8417ccb486c35de22c268dc52d8237442110b9b5"} Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.474633 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.537457 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-scripts\") pod \"a7426fff-9173-428f-949a-270118263742\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.537522 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j58d\" (UniqueName: \"kubernetes.io/projected/a7426fff-9173-428f-949a-270118263742-kube-api-access-2j58d\") pod \"a7426fff-9173-428f-949a-270118263742\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.537634 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-sg-core-conf-yaml\") pod \"a7426fff-9173-428f-949a-270118263742\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.537819 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-config-data\") pod \"a7426fff-9173-428f-949a-270118263742\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.537988 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-log-httpd\") pod \"a7426fff-9173-428f-949a-270118263742\" (UID: 
\"a7426fff-9173-428f-949a-270118263742\") " Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.538024 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-combined-ca-bundle\") pod \"a7426fff-9173-428f-949a-270118263742\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.538048 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-run-httpd\") pod \"a7426fff-9173-428f-949a-270118263742\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.538110 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-ceilometer-tls-certs\") pod \"a7426fff-9173-428f-949a-270118263742\" (UID: \"a7426fff-9173-428f-949a-270118263742\") " Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.538481 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a7426fff-9173-428f-949a-270118263742" (UID: "a7426fff-9173-428f-949a-270118263742"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.538540 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a7426fff-9173-428f-949a-270118263742" (UID: "a7426fff-9173-428f-949a-270118263742"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.539200 4849 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.539219 4849 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7426fff-9173-428f-949a-270118263742-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.547929 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-scripts" (OuterVolumeSpecName: "scripts") pod "a7426fff-9173-428f-949a-270118263742" (UID: "a7426fff-9173-428f-949a-270118263742"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.555911 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7426fff-9173-428f-949a-270118263742-kube-api-access-2j58d" (OuterVolumeSpecName: "kube-api-access-2j58d") pod "a7426fff-9173-428f-949a-270118263742" (UID: "a7426fff-9173-428f-949a-270118263742"). InnerVolumeSpecName "kube-api-access-2j58d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.589276 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a7426fff-9173-428f-949a-270118263742" (UID: "a7426fff-9173-428f-949a-270118263742"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.634266 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a7426fff-9173-428f-949a-270118263742" (UID: "a7426fff-9173-428f-949a-270118263742"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.642450 4849 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.642657 4849 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.642754 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.642877 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j58d\" (UniqueName: \"kubernetes.io/projected/a7426fff-9173-428f-949a-270118263742-kube-api-access-2j58d\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.644983 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7426fff-9173-428f-949a-270118263742" (UID: "a7426fff-9173-428f-949a-270118263742"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.674951 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-config-data" (OuterVolumeSpecName: "config-data") pod "a7426fff-9173-428f-949a-270118263742" (UID: "a7426fff-9173-428f-949a-270118263742"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.744341 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:01 crc kubenswrapper[4849]: I1209 11:50:01.744374 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7426fff-9173-428f-949a-270118263742-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.299949 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xzqt9" event={"ID":"b8a2e163-5f9e-463e-baba-5dff706bbdd4","Type":"ContainerStarted","Data":"0f76eb5fdaee1b2552caafe756b826f7dce88dcd9ffbd74006b132844821a07b"} Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.339209 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7426fff-9173-428f-949a-270118263742","Type":"ContainerDied","Data":"ea525d04047ca7c023e1cfa3e393b01da20198fab55682bb0910ff4a7c04c13d"} Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.339263 4849 scope.go:117] "RemoveContainer" containerID="13a79cdb1c11110c1815593c2069411b00d8bfc33908bae64baf1dbc6706d401" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.339472 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.342478 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xzqt9" podStartSLOduration=2.34246622 podStartE2EDuration="2.34246622s" podCreationTimestamp="2025-12-09 11:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:50:02.333780954 +0000 UTC m=+1384.873665270" watchObservedRunningTime="2025-12-09 11:50:02.34246622 +0000 UTC m=+1384.882350556" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.390113 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.390707 4849 scope.go:117] "RemoveContainer" containerID="4b4e06dc563ab993cea69a4df078840ecfb7e9478ca1fda5ea2403f921f388fb" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.418265 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.433671 4849 scope.go:117] "RemoveContainer" containerID="7650cf333ecd17b0e85349652b79249bac3666f7b903e05ccf2e0b389dbf6e32" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.445759 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:50:02 crc kubenswrapper[4849]: E1209 11:50:02.446173 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7426fff-9173-428f-949a-270118263742" containerName="proxy-httpd" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.446192 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7426fff-9173-428f-949a-270118263742" containerName="proxy-httpd" Dec 09 11:50:02 crc kubenswrapper[4849]: E1209 11:50:02.446213 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7426fff-9173-428f-949a-270118263742" containerName="ceilometer-notification-agent" Dec 09 11:50:02 crc 
kubenswrapper[4849]: I1209 11:50:02.446220 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7426fff-9173-428f-949a-270118263742" containerName="ceilometer-notification-agent" Dec 09 11:50:02 crc kubenswrapper[4849]: E1209 11:50:02.446244 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7426fff-9173-428f-949a-270118263742" containerName="sg-core" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.446251 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7426fff-9173-428f-949a-270118263742" containerName="sg-core" Dec 09 11:50:02 crc kubenswrapper[4849]: E1209 11:50:02.446264 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7426fff-9173-428f-949a-270118263742" containerName="ceilometer-central-agent" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.446271 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7426fff-9173-428f-949a-270118263742" containerName="ceilometer-central-agent" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.446495 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7426fff-9173-428f-949a-270118263742" containerName="ceilometer-notification-agent" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.446521 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7426fff-9173-428f-949a-270118263742" containerName="sg-core" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.446533 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7426fff-9173-428f-949a-270118263742" containerName="ceilometer-central-agent" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.446547 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7426fff-9173-428f-949a-270118263742" containerName="proxy-httpd" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.448503 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.453664 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.453805 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.454225 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.475035 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.475566 4849 scope.go:117] "RemoveContainer" containerID="7e0ba3f404c85cde2a42eaa79a18dbfb54344b4f50cad6c0ab779d098be84472" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.551363 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7426fff-9173-428f-949a-270118263742" path="/var/lib/kubelet/pods/a7426fff-9173-428f-949a-270118263742/volumes" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.566066 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-scripts\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.566143 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.566176 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-config-data\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.566235 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-run-httpd\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.566273 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n87dv\" (UniqueName: \"kubernetes.io/projected/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-kube-api-access-n87dv\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.566288 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-log-httpd\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.566306 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.566333 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.668478 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-scripts\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.668689 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.668737 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-config-data\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.668807 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-run-httpd\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.669501 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n87dv\" (UniqueName: \"kubernetes.io/projected/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-kube-api-access-n87dv\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.669532 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-log-httpd\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.669559 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.669585 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.669718 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-run-httpd\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.669978 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-log-httpd\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.676212 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.676562 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.676655 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-config-data\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.677171 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.689827 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n87dv\" (UniqueName: \"kubernetes.io/projected/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-kube-api-access-n87dv\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.695953 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4db3e0c4-4bd4-4096-9186-49c7b7d371a7-scripts\") pod \"ceilometer-0\" (UID: \"4db3e0c4-4bd4-4096-9186-49c7b7d371a7\") " pod="openstack/ceilometer-0" Dec 09 11:50:02 crc kubenswrapper[4849]: I1209 11:50:02.824655 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.012987 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.093652 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6bk4\" (UniqueName: \"kubernetes.io/projected/f397cb53-5c12-40fe-99dc-3373e5e76539-kube-api-access-x6bk4\") pod \"f397cb53-5c12-40fe-99dc-3373e5e76539\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.094026 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-config-data\") pod \"f397cb53-5c12-40fe-99dc-3373e5e76539\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.094116 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f397cb53-5c12-40fe-99dc-3373e5e76539-logs\") pod \"f397cb53-5c12-40fe-99dc-3373e5e76539\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.094148 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-combined-ca-bundle\") pod \"f397cb53-5c12-40fe-99dc-3373e5e76539\" (UID: \"f397cb53-5c12-40fe-99dc-3373e5e76539\") " Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.095282 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f397cb53-5c12-40fe-99dc-3373e5e76539-logs" (OuterVolumeSpecName: "logs") pod "f397cb53-5c12-40fe-99dc-3373e5e76539" (UID: "f397cb53-5c12-40fe-99dc-3373e5e76539"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.101617 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f397cb53-5c12-40fe-99dc-3373e5e76539-kube-api-access-x6bk4" (OuterVolumeSpecName: "kube-api-access-x6bk4") pod "f397cb53-5c12-40fe-99dc-3373e5e76539" (UID: "f397cb53-5c12-40fe-99dc-3373e5e76539"). InnerVolumeSpecName "kube-api-access-x6bk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.153085 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-config-data" (OuterVolumeSpecName: "config-data") pod "f397cb53-5c12-40fe-99dc-3373e5e76539" (UID: "f397cb53-5c12-40fe-99dc-3373e5e76539"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.153746 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f397cb53-5c12-40fe-99dc-3373e5e76539" (UID: "f397cb53-5c12-40fe-99dc-3373e5e76539"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.195801 4849 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f397cb53-5c12-40fe-99dc-3373e5e76539-logs\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.195841 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.195858 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6bk4\" (UniqueName: \"kubernetes.io/projected/f397cb53-5c12-40fe-99dc-3373e5e76539-kube-api-access-x6bk4\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.195872 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f397cb53-5c12-40fe-99dc-3373e5e76539-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.379276 4849 generic.go:334] "Generic (PLEG): container finished" podID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerID="f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db" exitCode=0 Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.388558 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.391646 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f397cb53-5c12-40fe-99dc-3373e5e76539","Type":"ContainerDied","Data":"f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db"} Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.391711 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f397cb53-5c12-40fe-99dc-3373e5e76539","Type":"ContainerDied","Data":"1ecf1648a4cb2c3e5b901b3f2c609a2ff837c435a1e314aae7dc56d35106d7f2"} Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.391730 4849 scope.go:117] "RemoveContainer" containerID="f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.451569 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.476361 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.527808 4849 scope.go:117] "RemoveContainer" containerID="04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.551018 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:03 crc kubenswrapper[4849]: E1209 11:50:03.552274 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-log" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.552304 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-log" Dec 09 11:50:03 crc kubenswrapper[4849]: E1209 11:50:03.552320 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-api" Dec 09 11:50:03 crc 
kubenswrapper[4849]: I1209 11:50:03.552328 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-api" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.552806 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-api" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.552825 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" containerName="nova-api-log" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.554805 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.558191 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.559466 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.563812 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.591994 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.592969 4849 scope.go:117] "RemoveContainer" containerID="f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db" Dec 09 11:50:03 crc kubenswrapper[4849]: E1209 11:50:03.593539 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db\": container with ID starting with f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db not found: ID does not exist" containerID="f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.593578 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db"} err="failed to get container status \"f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db\": rpc error: code = NotFound desc = could not find container \"f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db\": container with ID starting with f71df668282cfa3d2af167b4c2f82de7d9efb43c8fb5836ed8672139e99305db not found: ID does not exist" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.593605 4849 scope.go:117] "RemoveContainer" containerID="04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99" Dec 09 11:50:03 crc kubenswrapper[4849]: E1209 11:50:03.594843 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99\": container with ID starting with 04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99 not found: ID does not exist" containerID="04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.594901 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99"} err="failed to get container status 
\"04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99\": rpc error: code = NotFound desc = could not find container \"04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99\": container with ID starting with 04919649ac2a09f3b79dfc620ee19c1f6dc2a2ef7f9c6506ddefe82935c89e99 not found: ID does not exist" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.606652 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:03 crc kubenswrapper[4849]: E1209 11:50:03.622423 4849 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf397cb53_5c12_40fe_99dc_3373e5e76539.slice/crio-1ecf1648a4cb2c3e5b901b3f2c609a2ff837c435a1e314aae7dc56d35106d7f2\": RecentStats: unable to find data in memory cache]" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.626583 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.626657 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-public-tls-certs\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.626798 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-logs\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.627073 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-config-data\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.627142 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4shm8\" (UniqueName: \"kubernetes.io/projected/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-kube-api-access-4shm8\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.627192 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.728882 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.728948 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-public-tls-certs\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.728998 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-logs\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.729664 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-logs\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.729805 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-config-data\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.729890 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4shm8\" (UniqueName: \"kubernetes.io/projected/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-kube-api-access-4shm8\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.729958 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.733581 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.733597 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-config-data\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.733958 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.734960 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-public-tls-certs\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.754422 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4shm8\" (UniqueName: \"kubernetes.io/projected/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-kube-api-access-4shm8\") pod \"nova-api-0\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " pod="openstack/nova-api-0" Dec 09 11:50:03 crc kubenswrapper[4849]: I1209 11:50:03.901374 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:50:04 crc kubenswrapper[4849]: I1209 11:50:04.400032 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4db3e0c4-4bd4-4096-9186-49c7b7d371a7","Type":"ContainerStarted","Data":"777ce6d11c4503664b9150735b2d062c73e6391bc0ad7dcc52b1c8fd15fc5ce3"} Dec 09 11:50:04 crc kubenswrapper[4849]: I1209 11:50:04.400624 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4db3e0c4-4bd4-4096-9186-49c7b7d371a7","Type":"ContainerStarted","Data":"cb76154a4e531dc36e6515b0d962fd5fb7d0d4b92ccf3b793ed3e304ce998a00"} Dec 09 11:50:04 crc kubenswrapper[4849]: I1209 11:50:04.449123 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:04 crc kubenswrapper[4849]: I1209 11:50:04.561399 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f397cb53-5c12-40fe-99dc-3373e5e76539" path="/var/lib/kubelet/pods/f397cb53-5c12-40fe-99dc-3373e5e76539/volumes" Dec 09 11:50:05 crc kubenswrapper[4849]: I1209 11:50:05.427926 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4db3e0c4-4bd4-4096-9186-49c7b7d371a7","Type":"ContainerStarted","Data":"7931ebb4084c53118d709ea967648dfea9b8f3c3504086a39e38d7d9b3cb850c"} Dec 09 11:50:05 crc kubenswrapper[4849]: I1209 11:50:05.431514 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5ffec056-9c8f-4f27-a4fe-80a388d83ef1","Type":"ContainerStarted","Data":"7177770b7217d324428d3dd89a1dbabaeb09db959aea6760a359cb929e28833a"} Dec 09 11:50:05 crc kubenswrapper[4849]: I1209 11:50:05.431572 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5ffec056-9c8f-4f27-a4fe-80a388d83ef1","Type":"ContainerStarted","Data":"2981d6da1ea0c43594de718ddd4b64b25e84e163a89fbc551ac4cc65238e5f28"} Dec 09 11:50:05 crc kubenswrapper[4849]: I1209 11:50:05.431587 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5ffec056-9c8f-4f27-a4fe-80a388d83ef1","Type":"ContainerStarted","Data":"f9f01450ae895ec3972465f7ed20f0987ffa6acd45467bf1d700d0528f1176a0"} Dec 09 11:50:05 crc kubenswrapper[4849]: I1209 11:50:05.460863 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.460838272 podStartE2EDuration="2.460838272s" podCreationTimestamp="2025-12-09 11:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:50:05.45793258 +0000 UTC m=+1387.997816906" watchObservedRunningTime="2025-12-09 11:50:05.460838272 +0000 UTC m=+1388.000722598" Dec 09 11:50:06 crc kubenswrapper[4849]: I1209 11:50:06.444993 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4db3e0c4-4bd4-4096-9186-49c7b7d371a7","Type":"ContainerStarted","Data":"dd9c17b90203319deaf3b49d0c8e55c2e93be148bb5974154d63dceb9d4236b3"} Dec 09 11:50:06 crc kubenswrapper[4849]: I1209 11:50:06.802585 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:50:06 crc kubenswrapper[4849]: I1209 11:50:06.937804 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-xv67s"] Dec 09 11:50:06 crc kubenswrapper[4849]: I1209 11:50:06.938098 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" podUID="db40c8de-3699-4c66-be24-cc3f9c55bf6d" containerName="dnsmasq-dns" containerID="cri-o://1d8cbcf4455793d746c47187ebb6f8c13e31141c875be5823b564d7ccb80c254" gracePeriod=10 Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.537259 4849 generic.go:334] "Generic (PLEG): container finished" podID="db40c8de-3699-4c66-be24-cc3f9c55bf6d" containerID="1d8cbcf4455793d746c47187ebb6f8c13e31141c875be5823b564d7ccb80c254" exitCode=0 Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.537643 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" event={"ID":"db40c8de-3699-4c66-be24-cc3f9c55bf6d","Type":"ContainerDied","Data":"1d8cbcf4455793d746c47187ebb6f8c13e31141c875be5823b564d7ccb80c254"} Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.566674 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.567187 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4db3e0c4-4bd4-4096-9186-49c7b7d371a7","Type":"ContainerStarted","Data":"ca1e5b143a54595f1df06a2345440126cfed4ddca82f0fec7e55c777ee8d26ec"} Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.568319 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.689571 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.501229654 podStartE2EDuration="5.689400158s" podCreationTimestamp="2025-12-09 11:50:02 +0000 UTC" firstStartedPulling="2025-12-09 11:50:03.551698498 +0000 UTC m=+1386.091582824" lastFinishedPulling="2025-12-09 11:50:06.739869022 +0000 UTC m=+1389.279753328" observedRunningTime="2025-12-09 11:50:07.677101211 +0000 UTC m=+1390.216985537" watchObservedRunningTime="2025-12-09 11:50:07.689400158 +0000 UTC m=+1390.229284474" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.732395 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-dns-svc\") pod \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.732599 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zszk\" (UniqueName: \"kubernetes.io/projected/db40c8de-3699-4c66-be24-cc3f9c55bf6d-kube-api-access-2zszk\") pod \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.732671 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-sb\") pod \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.732828 4849 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-nb\") pod \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.733512 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-config\") pod \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\" (UID: \"db40c8de-3699-4c66-be24-cc3f9c55bf6d\") " Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.757359 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db40c8de-3699-4c66-be24-cc3f9c55bf6d-kube-api-access-2zszk" (OuterVolumeSpecName: "kube-api-access-2zszk") pod "db40c8de-3699-4c66-be24-cc3f9c55bf6d" (UID: "db40c8de-3699-4c66-be24-cc3f9c55bf6d"). InnerVolumeSpecName "kube-api-access-2zszk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.859654 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zszk\" (UniqueName: \"kubernetes.io/projected/db40c8de-3699-4c66-be24-cc3f9c55bf6d-kube-api-access-2zszk\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.881461 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db40c8de-3699-4c66-be24-cc3f9c55bf6d" (UID: "db40c8de-3699-4c66-be24-cc3f9c55bf6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.891237 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-config" (OuterVolumeSpecName: "config") pod "db40c8de-3699-4c66-be24-cc3f9c55bf6d" (UID: "db40c8de-3699-4c66-be24-cc3f9c55bf6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.893069 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db40c8de-3699-4c66-be24-cc3f9c55bf6d" (UID: "db40c8de-3699-4c66-be24-cc3f9c55bf6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.934643 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db40c8de-3699-4c66-be24-cc3f9c55bf6d" (UID: "db40c8de-3699-4c66-be24-cc3f9c55bf6d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.961755 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.961791 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.961800 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:07 crc kubenswrapper[4849]: I1209 11:50:07.961809 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db40c8de-3699-4c66-be24-cc3f9c55bf6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:08 crc kubenswrapper[4849]: I1209 11:50:08.579256 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" Dec 09 11:50:08 crc kubenswrapper[4849]: I1209 11:50:08.579287 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-xv67s" event={"ID":"db40c8de-3699-4c66-be24-cc3f9c55bf6d","Type":"ContainerDied","Data":"1a81077af902e8e50cf93d79a0c132201342749886d62781f8d777fde4acf919"} Dec 09 11:50:08 crc kubenswrapper[4849]: I1209 11:50:08.579318 4849 scope.go:117] "RemoveContainer" containerID="1d8cbcf4455793d746c47187ebb6f8c13e31141c875be5823b564d7ccb80c254" Dec 09 11:50:08 crc kubenswrapper[4849]: I1209 11:50:08.618570 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-xv67s"] Dec 09 11:50:08 crc kubenswrapper[4849]: I1209 11:50:08.620857 4849 scope.go:117] "RemoveContainer" containerID="294ad97c3b7ac7b22f5626086dfd3083852115760f4ddb967951e1f4eaed6dd4" Dec 09 11:50:08 crc kubenswrapper[4849]: I1209 11:50:08.631763 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-xv67s"] Dec 09 11:50:09 crc kubenswrapper[4849]: I1209 11:50:09.595171 4849 generic.go:334] "Generic (PLEG): container finished" podID="b8a2e163-5f9e-463e-baba-5dff706bbdd4" containerID="0f76eb5fdaee1b2552caafe756b826f7dce88dcd9ffbd74006b132844821a07b" exitCode=0 Dec 09 11:50:09 crc kubenswrapper[4849]: I1209 11:50:09.595249 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xzqt9" event={"ID":"b8a2e163-5f9e-463e-baba-5dff706bbdd4","Type":"ContainerDied","Data":"0f76eb5fdaee1b2552caafe756b826f7dce88dcd9ffbd74006b132844821a07b"} Dec 09 11:50:10 crc kubenswrapper[4849]: I1209 11:50:10.546665 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db40c8de-3699-4c66-be24-cc3f9c55bf6d" path="/var/lib/kubelet/pods/db40c8de-3699-4c66-be24-cc3f9c55bf6d/volumes" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.024218 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.220066 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-combined-ca-bundle\") pod \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.220483 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpcsp\" (UniqueName: \"kubernetes.io/projected/b8a2e163-5f9e-463e-baba-5dff706bbdd4-kube-api-access-qpcsp\") pod \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.220514 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-config-data\") pod \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.220553 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-scripts\") pod \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\" (UID: \"b8a2e163-5f9e-463e-baba-5dff706bbdd4\") " Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.247061 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a2e163-5f9e-463e-baba-5dff706bbdd4-kube-api-access-qpcsp" (OuterVolumeSpecName: "kube-api-access-qpcsp") pod "b8a2e163-5f9e-463e-baba-5dff706bbdd4" (UID: "b8a2e163-5f9e-463e-baba-5dff706bbdd4"). InnerVolumeSpecName "kube-api-access-qpcsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.247307 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-scripts" (OuterVolumeSpecName: "scripts") pod "b8a2e163-5f9e-463e-baba-5dff706bbdd4" (UID: "b8a2e163-5f9e-463e-baba-5dff706bbdd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.255123 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8a2e163-5f9e-463e-baba-5dff706bbdd4" (UID: "b8a2e163-5f9e-463e-baba-5dff706bbdd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.265627 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-config-data" (OuterVolumeSpecName: "config-data") pod "b8a2e163-5f9e-463e-baba-5dff706bbdd4" (UID: "b8a2e163-5f9e-463e-baba-5dff706bbdd4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.323231 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.323281 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpcsp\" (UniqueName: \"kubernetes.io/projected/b8a2e163-5f9e-463e-baba-5dff706bbdd4-kube-api-access-qpcsp\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.323299 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.323310 4849 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8a2e163-5f9e-463e-baba-5dff706bbdd4-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.619713 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xzqt9" event={"ID":"b8a2e163-5f9e-463e-baba-5dff706bbdd4","Type":"ContainerDied","Data":"f83d75f750cd5334970e853d8417ccb486c35de22c268dc52d8237442110b9b5"} Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.619755 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xzqt9" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.619766 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f83d75f750cd5334970e853d8417ccb486c35de22c268dc52d8237442110b9b5" Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.921856 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.922199 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="22c28cc6-34bf-4e88-9468-13fed8dbd43e" containerName="nova-scheduler-scheduler" containerID="cri-o://8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca" gracePeriod=30 Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.934352 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.934643 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerName="nova-api-log" containerID="cri-o://2981d6da1ea0c43594de718ddd4b64b25e84e163a89fbc551ac4cc65238e5f28" gracePeriod=30 Dec 09 11:50:11 crc kubenswrapper[4849]: I1209 11:50:11.935072 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerName="nova-api-api" containerID="cri-o://7177770b7217d324428d3dd89a1dbabaeb09db959aea6760a359cb929e28833a" gracePeriod=30 Dec 09 11:50:12 crc kubenswrapper[4849]: I1209 11:50:12.051212 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 11:50:12 crc kubenswrapper[4849]: I1209 11:50:12.051592 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8d03d805-9a76-4af1-9618-9664e506474a" 
containerName="nova-metadata-log" containerID="cri-o://5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887" gracePeriod=30 Dec 09 11:50:12 crc kubenswrapper[4849]: I1209 11:50:12.051695 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-metadata" containerID="cri-o://021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c" gracePeriod=30 Dec 09 11:50:12 crc kubenswrapper[4849]: I1209 11:50:12.632024 4849 generic.go:334] "Generic (PLEG): container finished" podID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerID="7177770b7217d324428d3dd89a1dbabaeb09db959aea6760a359cb929e28833a" exitCode=0 Dec 09 11:50:12 crc kubenswrapper[4849]: I1209 11:50:12.632360 4849 generic.go:334] "Generic (PLEG): container finished" podID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerID="2981d6da1ea0c43594de718ddd4b64b25e84e163a89fbc551ac4cc65238e5f28" exitCode=143 Dec 09 11:50:12 crc kubenswrapper[4849]: I1209 11:50:12.632126 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5ffec056-9c8f-4f27-a4fe-80a388d83ef1","Type":"ContainerDied","Data":"7177770b7217d324428d3dd89a1dbabaeb09db959aea6760a359cb929e28833a"} Dec 09 11:50:12 crc kubenswrapper[4849]: I1209 11:50:12.632485 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5ffec056-9c8f-4f27-a4fe-80a388d83ef1","Type":"ContainerDied","Data":"2981d6da1ea0c43594de718ddd4b64b25e84e163a89fbc551ac4cc65238e5f28"} Dec 09 11:50:12 crc kubenswrapper[4849]: I1209 11:50:12.635893 4849 generic.go:334] "Generic (PLEG): container finished" podID="8d03d805-9a76-4af1-9618-9664e506474a" containerID="5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887" exitCode=143 Dec 09 11:50:12 crc kubenswrapper[4849]: I1209 11:50:12.635918 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d03d805-9a76-4af1-9618-9664e506474a","Type":"ContainerDied","Data":"5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887"} Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.316764 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.468482 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-public-tls-certs\") pod \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.468833 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4shm8\" (UniqueName: \"kubernetes.io/projected/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-kube-api-access-4shm8\") pod \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.468885 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-logs\") pod \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.468964 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-combined-ca-bundle\") pod \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.469235 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-logs" (OuterVolumeSpecName: "logs") pod "5ffec056-9c8f-4f27-a4fe-80a388d83ef1" (UID: "5ffec056-9c8f-4f27-a4fe-80a388d83ef1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.469270 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-internal-tls-certs\") pod \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.469333 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-config-data\") pod \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\" (UID: \"5ffec056-9c8f-4f27-a4fe-80a388d83ef1\") " Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.469926 4849 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-logs\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.474548 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-kube-api-access-4shm8" (OuterVolumeSpecName: "kube-api-access-4shm8") pod "5ffec056-9c8f-4f27-a4fe-80a388d83ef1" (UID: "5ffec056-9c8f-4f27-a4fe-80a388d83ef1"). InnerVolumeSpecName "kube-api-access-4shm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.505040 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-config-data" (OuterVolumeSpecName: "config-data") pod "5ffec056-9c8f-4f27-a4fe-80a388d83ef1" (UID: "5ffec056-9c8f-4f27-a4fe-80a388d83ef1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.520620 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ffec056-9c8f-4f27-a4fe-80a388d83ef1" (UID: "5ffec056-9c8f-4f27-a4fe-80a388d83ef1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.556232 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5ffec056-9c8f-4f27-a4fe-80a388d83ef1" (UID: "5ffec056-9c8f-4f27-a4fe-80a388d83ef1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.556968 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5ffec056-9c8f-4f27-a4fe-80a388d83ef1" (UID: "5ffec056-9c8f-4f27-a4fe-80a388d83ef1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.571015 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.571069 4849 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.571103 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4shm8\" (UniqueName: \"kubernetes.io/projected/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-kube-api-access-4shm8\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.571114 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.571123 4849 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ffec056-9c8f-4f27-a4fe-80a388d83ef1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.645812 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5ffec056-9c8f-4f27-a4fe-80a388d83ef1","Type":"ContainerDied","Data":"f9f01450ae895ec3972465f7ed20f0987ffa6acd45467bf1d700d0528f1176a0"} Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.645884 4849 scope.go:117] 
"RemoveContainer" containerID="7177770b7217d324428d3dd89a1dbabaeb09db959aea6760a359cb929e28833a" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.645887 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.677918 4849 scope.go:117] "RemoveContainer" containerID="2981d6da1ea0c43594de718ddd4b64b25e84e163a89fbc551ac4cc65238e5f28" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.711489 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.738490 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.756527 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:13 crc kubenswrapper[4849]: E1209 11:50:13.756916 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerName="nova-api-api" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.756933 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerName="nova-api-api" Dec 09 11:50:13 crc kubenswrapper[4849]: E1209 11:50:13.756944 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerName="nova-api-log" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.756951 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerName="nova-api-log" Dec 09 11:50:13 crc kubenswrapper[4849]: E1209 11:50:13.756970 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db40c8de-3699-4c66-be24-cc3f9c55bf6d" containerName="init" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.756975 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="db40c8de-3699-4c66-be24-cc3f9c55bf6d" containerName="init" Dec 09 11:50:13 crc kubenswrapper[4849]: E1209 11:50:13.756990 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db40c8de-3699-4c66-be24-cc3f9c55bf6d" containerName="dnsmasq-dns" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.756997 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="db40c8de-3699-4c66-be24-cc3f9c55bf6d" containerName="dnsmasq-dns" Dec 09 11:50:13 crc kubenswrapper[4849]: E1209 11:50:13.757021 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a2e163-5f9e-463e-baba-5dff706bbdd4" containerName="nova-manage" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.757027 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a2e163-5f9e-463e-baba-5dff706bbdd4" containerName="nova-manage" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.757183 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerName="nova-api-api" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.757198 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="db40c8de-3699-4c66-be24-cc3f9c55bf6d" containerName="dnsmasq-dns" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.757221 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" containerName="nova-api-log" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.757231 4849 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b8a2e163-5f9e-463e-baba-5dff706bbdd4" containerName="nova-manage" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.761672 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.768746 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.769654 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.770039 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.773617 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.876444 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.876513 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xm86\" (UniqueName: \"kubernetes.io/projected/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-kube-api-access-2xm86\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.876540 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-logs\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.876599 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.876641 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-config-data\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.876825 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-public-tls-certs\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.978470 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.978544 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xm86\" (UniqueName: \"kubernetes.io/projected/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-kube-api-access-2xm86\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.978575 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-logs\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.978640 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.978694 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-config-data\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.978758 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-public-tls-certs\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.979425 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-logs\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.984112 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-public-tls-certs\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.984910 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.985954 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-config-data\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.988763 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:13 crc kubenswrapper[4849]: I1209 11:50:13.997494 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2xm86\" (UniqueName: \"kubernetes.io/projected/a98f6160-0455-4d4f-adfa-d01a4a2f1edc-kube-api-access-2xm86\") pod \"nova-api-0\" (UID: \"a98f6160-0455-4d4f-adfa-d01a4a2f1edc\") " pod="openstack/nova-api-0" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.078792 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 11:50:14 crc kubenswrapper[4849]: E1209 11:50:14.396233 4849 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca is running failed: container process not found" containerID="8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 11:50:14 crc kubenswrapper[4849]: E1209 11:50:14.397025 4849 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca is running failed: container process not found" containerID="8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 11:50:14 crc kubenswrapper[4849]: E1209 11:50:14.397275 4849 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca is running failed: container process not found" containerID="8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 11:50:14 crc kubenswrapper[4849]: E1209 11:50:14.397304 4849 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="22c28cc6-34bf-4e88-9468-13fed8dbd43e" containerName="nova-scheduler-scheduler" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.556615 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ffec056-9c8f-4f27-a4fe-80a388d83ef1" path="/var/lib/kubelet/pods/5ffec056-9c8f-4f27-a4fe-80a388d83ef1/volumes" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.621625 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.657107 4849 generic.go:334] "Generic (PLEG): container finished" podID="22c28cc6-34bf-4e88-9468-13fed8dbd43e" containerID="8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca" exitCode=0 Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.657148 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22c28cc6-34bf-4e88-9468-13fed8dbd43e","Type":"ContainerDied","Data":"8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca"} Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.657171 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22c28cc6-34bf-4e88-9468-13fed8dbd43e","Type":"ContainerDied","Data":"9fc38e17f6893bb91884301a474c4fd12125e40c5e16061c01aa06788364f16f"} Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.657176 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.657190 4849 scope.go:117] "RemoveContainer" containerID="8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.687650 4849 scope.go:117] "RemoveContainer" containerID="8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca" Dec 09 11:50:14 crc kubenswrapper[4849]: E1209 11:50:14.688048 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca\": container with ID starting with 8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca not found: ID does not exist" containerID="8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.688194 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca"} err="failed to get container status \"8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca\": rpc error: code = NotFound desc = could not find container \"8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca\": container with ID starting with 8e86dc5d4fdc739c00e36deb2bcabd6cee0312a329822afc4147a6a45bdaf6ca not found: ID does not exist" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.698014 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.796312 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-combined-ca-bundle\") pod \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.796770 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtmgn\" (UniqueName: \"kubernetes.io/projected/22c28cc6-34bf-4e88-9468-13fed8dbd43e-kube-api-access-dtmgn\") pod \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.797092 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-config-data\") pod \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\" (UID: \"22c28cc6-34bf-4e88-9468-13fed8dbd43e\") " Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.810129 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c28cc6-34bf-4e88-9468-13fed8dbd43e-kube-api-access-dtmgn" (OuterVolumeSpecName: "kube-api-access-dtmgn") pod "22c28cc6-34bf-4e88-9468-13fed8dbd43e" (UID: "22c28cc6-34bf-4e88-9468-13fed8dbd43e"). InnerVolumeSpecName "kube-api-access-dtmgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.834618 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-config-data" (OuterVolumeSpecName: "config-data") pod "22c28cc6-34bf-4e88-9468-13fed8dbd43e" (UID: "22c28cc6-34bf-4e88-9468-13fed8dbd43e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.836027 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22c28cc6-34bf-4e88-9468-13fed8dbd43e" (UID: "22c28cc6-34bf-4e88-9468-13fed8dbd43e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.900599 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.900922 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtmgn\" (UniqueName: \"kubernetes.io/projected/22c28cc6-34bf-4e88-9468-13fed8dbd43e-kube-api-access-dtmgn\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:14 crc kubenswrapper[4849]: I1209 11:50:14.900932 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22c28cc6-34bf-4e88-9468-13fed8dbd43e-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.043992 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.069828 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.084488 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:50:15 crc kubenswrapper[4849]: E1209 11:50:15.085029 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c28cc6-34bf-4e88-9468-13fed8dbd43e" containerName="nova-scheduler-scheduler" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.085046 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c28cc6-34bf-4e88-9468-13fed8dbd43e" containerName="nova-scheduler-scheduler" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.085307 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c28cc6-34bf-4e88-9468-13fed8dbd43e" containerName="nova-scheduler-scheduler" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.086162 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.098462 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.115167 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.127295 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6mfh\" (UniqueName: \"kubernetes.io/projected/c72aad0e-4358-4166-b17b-2114ea10bae1-kube-api-access-j6mfh\") pod \"nova-scheduler-0\" (UID: \"c72aad0e-4358-4166-b17b-2114ea10bae1\") " pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.127398 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c72aad0e-4358-4166-b17b-2114ea10bae1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c72aad0e-4358-4166-b17b-2114ea10bae1\") " pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.127464 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c72aad0e-4358-4166-b17b-2114ea10bae1-config-data\") pod \"nova-scheduler-0\" (UID: \"c72aad0e-4358-4166-b17b-2114ea10bae1\") " pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.239936 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6mfh\" (UniqueName: \"kubernetes.io/projected/c72aad0e-4358-4166-b17b-2114ea10bae1-kube-api-access-j6mfh\") pod \"nova-scheduler-0\" (UID: \"c72aad0e-4358-4166-b17b-2114ea10bae1\") " pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.240022 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c72aad0e-4358-4166-b17b-2114ea10bae1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c72aad0e-4358-4166-b17b-2114ea10bae1\") " pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.240066 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c72aad0e-4358-4166-b17b-2114ea10bae1-config-data\") pod \"nova-scheduler-0\" (UID: \"c72aad0e-4358-4166-b17b-2114ea10bae1\") " pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.257923 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c72aad0e-4358-4166-b17b-2114ea10bae1-config-data\") pod \"nova-scheduler-0\" (UID: \"c72aad0e-4358-4166-b17b-2114ea10bae1\") " pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.283536 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c72aad0e-4358-4166-b17b-2114ea10bae1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c72aad0e-4358-4166-b17b-2114ea10bae1\") " pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.310730 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6mfh\" (UniqueName: 
\"kubernetes.io/projected/c72aad0e-4358-4166-b17b-2114ea10bae1-kube-api-access-j6mfh\") pod \"nova-scheduler-0\" (UID: \"c72aad0e-4358-4166-b17b-2114ea10bae1\") " pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.435820 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.596735 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.175:8775/\": read tcp 10.217.0.2:54476->10.217.0.175:8775: read: connection reset by peer" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.597626 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.175:8775/\": read tcp 10.217.0.2:54462->10.217.0.175:8775: read: connection reset by peer" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.680173 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a98f6160-0455-4d4f-adfa-d01a4a2f1edc","Type":"ContainerStarted","Data":"043f2a503c11d6a9f08e27c83589cf5d3bd4318c1a5653210f63dfec953ba790"} Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.680217 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a98f6160-0455-4d4f-adfa-d01a4a2f1edc","Type":"ContainerStarted","Data":"ef31d2cd3ee8c667291a27c9b8fd6aca978a13585118f0c5c8a17acd239c9156"} Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.680227 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a98f6160-0455-4d4f-adfa-d01a4a2f1edc","Type":"ContainerStarted","Data":"7ac6a536460c08cca37bca8b57a0596e7049f51201db72a3a73a40ce3a87e1dd"} Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.717310 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.717289193 podStartE2EDuration="2.717289193s" podCreationTimestamp="2025-12-09 11:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:50:15.705458667 +0000 UTC m=+1398.245342983" watchObservedRunningTime="2025-12-09 11:50:15.717289193 +0000 UTC m=+1398.257173509" Dec 09 11:50:15 crc kubenswrapper[4849]: I1209 11:50:15.921032 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 11:50:15 crc kubenswrapper[4849]: W1209 11:50:15.936264 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc72aad0e_4358_4166_b17b_2114ea10bae1.slice/crio-5ecc4993dcc6674decb19906e112143175c1af37100943f9ec0714b8c2c9e979 WatchSource:0}: Error finding container 5ecc4993dcc6674decb19906e112143175c1af37100943f9ec0714b8c2c9e979: Status 404 returned error can't find the container with id 5ecc4993dcc6674decb19906e112143175c1af37100943f9ec0714b8c2c9e979 Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.007926 4849 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.068967 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-config-data\") pod \"8d03d805-9a76-4af1-9618-9664e506474a\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") "
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.069031 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d03d805-9a76-4af1-9618-9664e506474a-logs\") pod \"8d03d805-9a76-4af1-9618-9664e506474a\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") "
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.069131 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-combined-ca-bundle\") pod \"8d03d805-9a76-4af1-9618-9664e506474a\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") "
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.069191 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-nova-metadata-tls-certs\") pod \"8d03d805-9a76-4af1-9618-9664e506474a\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") "
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.069222 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55vp4\" (UniqueName: \"kubernetes.io/projected/8d03d805-9a76-4af1-9618-9664e506474a-kube-api-access-55vp4\") pod \"8d03d805-9a76-4af1-9618-9664e506474a\" (UID: \"8d03d805-9a76-4af1-9618-9664e506474a\") "
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.070494 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d03d805-9a76-4af1-9618-9664e506474a-logs" (OuterVolumeSpecName: "logs") pod "8d03d805-9a76-4af1-9618-9664e506474a" (UID: "8d03d805-9a76-4af1-9618-9664e506474a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.079585 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d03d805-9a76-4af1-9618-9664e506474a-kube-api-access-55vp4" (OuterVolumeSpecName: "kube-api-access-55vp4") pod "8d03d805-9a76-4af1-9618-9664e506474a" (UID: "8d03d805-9a76-4af1-9618-9664e506474a"). InnerVolumeSpecName "kube-api-access-55vp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.118346 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-config-data" (OuterVolumeSpecName: "config-data") pod "8d03d805-9a76-4af1-9618-9664e506474a" (UID: "8d03d805-9a76-4af1-9618-9664e506474a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.130268 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d03d805-9a76-4af1-9618-9664e506474a" (UID: "8d03d805-9a76-4af1-9618-9664e506474a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.165625 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8d03d805-9a76-4af1-9618-9664e506474a" (UID: "8d03d805-9a76-4af1-9618-9664e506474a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.170791 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.170823 4849 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d03d805-9a76-4af1-9618-9664e506474a-logs\") on node \"crc\" DevicePath \"\""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.170831 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.170842 4849 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d03d805-9a76-4af1-9618-9664e506474a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.170853 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55vp4\" (UniqueName: \"kubernetes.io/projected/8d03d805-9a76-4af1-9618-9664e506474a-kube-api-access-55vp4\") on node \"crc\" DevicePath \"\""
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.547723 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c28cc6-34bf-4e88-9468-13fed8dbd43e" path="/var/lib/kubelet/pods/22c28cc6-34bf-4e88-9468-13fed8dbd43e/volumes"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.689498 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c72aad0e-4358-4166-b17b-2114ea10bae1","Type":"ContainerStarted","Data":"ea3d25b8e2fa10c9f085854a6c62a54dbe271f6c232d28bf88f633e320ae177f"}
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.689813 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c72aad0e-4358-4166-b17b-2114ea10bae1","Type":"ContainerStarted","Data":"5ecc4993dcc6674decb19906e112143175c1af37100943f9ec0714b8c2c9e979"}
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.691471 4849 generic.go:334] "Generic (PLEG): container finished" podID="8d03d805-9a76-4af1-9618-9664e506474a" containerID="021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c" exitCode=0
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.691533 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.691567 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d03d805-9a76-4af1-9618-9664e506474a","Type":"ContainerDied","Data":"021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c"}
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.691586 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d03d805-9a76-4af1-9618-9664e506474a","Type":"ContainerDied","Data":"797f84fcaf74facce9e3e00f722b82d02a22e57b03996d2c8b7ba3db220920f5"}
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.691604 4849 scope.go:117] "RemoveContainer" containerID="021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.713914 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.7138935339999999 podStartE2EDuration="1.713893534s" podCreationTimestamp="2025-12-09 11:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:50:16.706091269 +0000 UTC m=+1399.245975585" watchObservedRunningTime="2025-12-09 11:50:16.713893534 +0000 UTC m=+1399.253777840"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.717123 4849 scope.go:117] "RemoveContainer" containerID="5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.734160 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.737188 4849 scope.go:117] "RemoveContainer" containerID="021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c"
Dec 09 11:50:16 crc kubenswrapper[4849]: E1209 11:50:16.737701 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c\": container with ID starting with 021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c not found: ID does not exist" containerID="021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.737740 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c"} err="failed to get container status \"021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c\": rpc error: code = NotFound desc = could not find container \"021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c\": container with ID starting with 021652fa500dc24f8a8779aee3668447d65cd3992b85842618131b78f4a97d0c not found: ID does not exist"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.737768 4849 scope.go:117] "RemoveContainer" containerID="5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887"
Dec 09 11:50:16 crc kubenswrapper[4849]: E1209 11:50:16.738153 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887\": container with ID starting with 5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887 not found: ID does not exist" containerID="5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.738181 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887"} err="failed to get container status \"5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887\": rpc error: code = NotFound desc = could not find container \"5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887\": container with ID starting with 5a1c8d59a2ea49c0f9b3d9245f0b553838f259bf11b5ed852681155d3459e887 not found: ID does not exist"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.754851 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.768373 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:50:16 crc kubenswrapper[4849]: E1209 11:50:16.768839 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-log"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.768864 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-log"
Dec 09 11:50:16 crc kubenswrapper[4849]: E1209 11:50:16.768907 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-metadata"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.768919 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-metadata"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.769126 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-log"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.769152 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d03d805-9a76-4af1-9618-9664e506474a" containerName="nova-metadata-metadata"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.770384 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.772427 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.773016 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.780029 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.886377 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b8f2bbe-7bf5-429e-835d-15cd1e039456-logs\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.886459 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvn66\" (UniqueName: \"kubernetes.io/projected/4b8f2bbe-7bf5-429e-835d-15cd1e039456-kube-api-access-kvn66\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.886488 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b8f2bbe-7bf5-429e-835d-15cd1e039456-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.886519 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b8f2bbe-7bf5-429e-835d-15cd1e039456-config-data\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.886548 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b8f2bbe-7bf5-429e-835d-15cd1e039456-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.988553 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b8f2bbe-7bf5-429e-835d-15cd1e039456-logs\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.988620 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvn66\" (UniqueName: \"kubernetes.io/projected/4b8f2bbe-7bf5-429e-835d-15cd1e039456-kube-api-access-kvn66\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.988649 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b8f2bbe-7bf5-429e-835d-15cd1e039456-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.988679 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b8f2bbe-7bf5-429e-835d-15cd1e039456-config-data\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.988710 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b8f2bbe-7bf5-429e-835d-15cd1e039456-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.989067 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b8f2bbe-7bf5-429e-835d-15cd1e039456-logs\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.994755 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b8f2bbe-7bf5-429e-835d-15cd1e039456-config-data\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:16 crc kubenswrapper[4849]: I1209 11:50:16.995510 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b8f2bbe-7bf5-429e-835d-15cd1e039456-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:17 crc kubenswrapper[4849]: I1209 11:50:17.003814 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b8f2bbe-7bf5-429e-835d-15cd1e039456-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:17 crc kubenswrapper[4849]: I1209 11:50:17.009396 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvn66\" (UniqueName: \"kubernetes.io/projected/4b8f2bbe-7bf5-429e-835d-15cd1e039456-kube-api-access-kvn66\") pod \"nova-metadata-0\" (UID: \"4b8f2bbe-7bf5-429e-835d-15cd1e039456\") " pod="openstack/nova-metadata-0"
Dec 09 11:50:17 crc kubenswrapper[4849]: I1209 11:50:17.088468 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 11:50:17 crc kubenswrapper[4849]: I1209 11:50:17.546076 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 11:50:17 crc kubenswrapper[4849]: W1209 11:50:17.554663 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b8f2bbe_7bf5_429e_835d_15cd1e039456.slice/crio-447fc712100e4ae914b8125f152309e5ca0f46ebae495fe8d88ad6b3752dfe26 WatchSource:0}: Error finding container 447fc712100e4ae914b8125f152309e5ca0f46ebae495fe8d88ad6b3752dfe26: Status 404 returned error can't find the container with id 447fc712100e4ae914b8125f152309e5ca0f46ebae495fe8d88ad6b3752dfe26
Dec 09 11:50:17 crc kubenswrapper[4849]: I1209 11:50:17.703368 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4b8f2bbe-7bf5-429e-835d-15cd1e039456","Type":"ContainerStarted","Data":"447fc712100e4ae914b8125f152309e5ca0f46ebae495fe8d88ad6b3752dfe26"}
Dec 09 11:50:18 crc kubenswrapper[4849]: I1209 11:50:18.551107 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d03d805-9a76-4af1-9618-9664e506474a" path="/var/lib/kubelet/pods/8d03d805-9a76-4af1-9618-9664e506474a/volumes"
Dec 09 11:50:18 crc kubenswrapper[4849]: I1209 11:50:18.714606 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4b8f2bbe-7bf5-429e-835d-15cd1e039456","Type":"ContainerStarted","Data":"1ce678b10469fb5961fc3d0e5e25b251396b9c2a9b4db6d3c17288432c3013c9"}
Dec 09 11:50:18 crc kubenswrapper[4849]: I1209 11:50:18.714654 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4b8f2bbe-7bf5-429e-835d-15cd1e039456","Type":"ContainerStarted","Data":"2357ea9d04bb9c1d24abf00500ad6ef4b72ba5cc772f2ae752990e92d63b54a4"}
Dec 09 11:50:18 crc kubenswrapper[4849]: I1209 11:50:18.745477 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.745455244 podStartE2EDuration="2.745455244s" podCreationTimestamp="2025-12-09 11:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:50:18.732680434 +0000 UTC m=+1401.272564760" watchObservedRunningTime="2025-12-09 11:50:18.745455244 +0000 UTC m=+1401.285339560"
Dec 09 11:50:20 crc kubenswrapper[4849]: I1209 11:50:20.435991 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 09 11:50:22 crc kubenswrapper[4849]: I1209 11:50:22.089014 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 09 11:50:22 crc kubenswrapper[4849]: I1209 11:50:22.089380 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 09 11:50:24 crc kubenswrapper[4849]: I1209 11:50:24.081378 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 09 11:50:24 crc kubenswrapper[4849]: I1209 11:50:24.082163 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 09 11:50:25 crc kubenswrapper[4849]: I1209 11:50:25.097579 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a98f6160-0455-4d4f-adfa-d01a4a2f1edc" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.183:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:50:25 crc kubenswrapper[4849]: I1209 11:50:25.097620 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a98f6160-0455-4d4f-adfa-d01a4a2f1edc" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.183:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:50:25 crc kubenswrapper[4849]: I1209 11:50:25.436856 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 09 11:50:25 crc kubenswrapper[4849]: I1209 11:50:25.465157 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 09 11:50:25 crc kubenswrapper[4849]: I1209 11:50:25.809915 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 09 11:50:27 crc kubenswrapper[4849]: I1209 11:50:27.089369 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 09 11:50:27 crc kubenswrapper[4849]: I1209 11:50:27.090175 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 09 11:50:28 crc kubenswrapper[4849]: I1209 11:50:28.105657 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4b8f2bbe-7bf5-429e-835d-15cd1e039456" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:50:28 crc kubenswrapper[4849]: I1209 11:50:28.105724 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4b8f2bbe-7bf5-429e-835d-15cd1e039456" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 11:50:32 crc kubenswrapper[4849]: I1209 11:50:32.833956 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 09 11:50:34 crc kubenswrapper[4849]: I1209 11:50:34.088785 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 09 11:50:34 crc kubenswrapper[4849]: I1209 11:50:34.089960 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 09 11:50:34 crc kubenswrapper[4849]: I1209 11:50:34.095301 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 09 11:50:34 crc kubenswrapper[4849]: I1209 11:50:34.096003 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 09 11:50:34 crc kubenswrapper[4849]: I1209 11:50:34.864734 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 09 11:50:34 crc kubenswrapper[4849]: I1209 11:50:34.873179 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 09 11:50:37 crc kubenswrapper[4849]: I1209 11:50:37.096517 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 09 11:50:37 crc kubenswrapper[4849]: I1209 11:50:37.097000 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 09 11:50:37 crc kubenswrapper[4849]: I1209 11:50:37.109499 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 09 11:50:37 crc kubenswrapper[4849]: I1209 11:50:37.894388 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 09 11:50:48 crc kubenswrapper[4849]: I1209 11:50:48.885740 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 11:50:50 crc kubenswrapper[4849]: I1209 11:50:50.359116 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 09 11:50:53 crc kubenswrapper[4849]: I1209 11:50:53.632526 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" containerName="rabbitmq" containerID="cri-o://86f5be6bd5c6c96ac6fa32eb10fb12b3a6ff13232f47af92aa7367509967bd66" gracePeriod=604796
Dec 09 11:50:54 crc kubenswrapper[4849]: I1209 11:50:54.743878 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused"
Dec 09 11:50:55 crc kubenswrapper[4849]: I1209 11:50:55.220950 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" containerName="rabbitmq" containerID="cri-o://c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010" gracePeriod=604796
Dec 09 11:50:55 crc kubenswrapper[4849]: I1209 11:50:55.238196 4849 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused"
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.089607 4849 generic.go:334] "Generic (PLEG): container finished" podID="86df3233-1d99-4023-9ff7-55bab063bd7e" containerID="86f5be6bd5c6c96ac6fa32eb10fb12b3a6ff13232f47af92aa7367509967bd66" exitCode=0
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.089682 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86df3233-1d99-4023-9ff7-55bab063bd7e","Type":"ContainerDied","Data":"86f5be6bd5c6c96ac6fa32eb10fb12b3a6ff13232f47af92aa7367509967bd66"}
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.214019 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371053 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-erlang-cookie\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371104 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-confd\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371125 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz9rl\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-kube-api-access-cz9rl\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371225 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-server-conf\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371248 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-plugins\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371269 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-plugins-conf\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371287 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371309 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-config-data\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371335 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-tls\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371365 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86df3233-1d99-4023-9ff7-55bab063bd7e-erlang-cookie-secret\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371399 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86df3233-1d99-4023-9ff7-55bab063bd7e-pod-info\") pod \"86df3233-1d99-4023-9ff7-55bab063bd7e\" (UID: \"86df3233-1d99-4023-9ff7-55bab063bd7e\") "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.371786 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.372088 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.373631 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.380179 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.380357 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.395640 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86df3233-1d99-4023-9ff7-55bab063bd7e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.395640 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/86df3233-1d99-4023-9ff7-55bab063bd7e-pod-info" (OuterVolumeSpecName: "pod-info") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.395783 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-kube-api-access-cz9rl" (OuterVolumeSpecName: "kube-api-access-cz9rl") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "kube-api-access-cz9rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.434354 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-config-data" (OuterVolumeSpecName: "config-data") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.472976 4849 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.473036 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz9rl\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-kube-api-access-cz9rl\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.473046 4849 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.473090 4849 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.473120 4849 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.473131 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.473139 4849 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.473170 4849 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86df3233-1d99-4023-9ff7-55bab063bd7e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.473181 4849 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86df3233-1d99-4023-9ff7-55bab063bd7e-pod-info\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.475575 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-server-conf" (OuterVolumeSpecName: "server-conf") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.493798 4849 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.576740 4849 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86df3233-1d99-4023-9ff7-55bab063bd7e-server-conf\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.576885 4849 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.587824 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "86df3233-1d99-4023-9ff7-55bab063bd7e" (UID: "86df3233-1d99-4023-9ff7-55bab063bd7e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 11:51:00 crc kubenswrapper[4849]: I1209 11:51:00.679296 4849 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86df3233-1d99-4023-9ff7-55bab063bd7e-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.100822 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86df3233-1d99-4023-9ff7-55bab063bd7e","Type":"ContainerDied","Data":"35bdc790c9160fc885e9322eaf35192968b1ddcb96c810b9d83065ad8d76474d"}
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.100916 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.101120 4849 scope.go:117] "RemoveContainer" containerID="86f5be6bd5c6c96ac6fa32eb10fb12b3a6ff13232f47af92aa7367509967bd66"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.135241 4849 scope.go:117] "RemoveContainer" containerID="115a468a692fb19ce7f718c4cc8fdb83f4844c63e4e4a649b01cb6f4bca1c956"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.137561 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.186683 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.216065 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 11:51:01 crc kubenswrapper[4849]: E1209 11:51:01.222282 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" containerName="setup-container"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.222526 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" containerName="setup-container"
Dec 09 11:51:01 crc kubenswrapper[4849]: E1209 11:51:01.222605 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" containerName="rabbitmq"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.222683 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" containerName="rabbitmq"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.223014 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" containerName="rabbitmq"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.224040 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
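[Analysis note] The rabbitmq shutdown above is graceful: "Killing container with a grace period" at 11:50:53 (gracePeriod=604796, i.e. roughly seven days allowed) is followed by the PLEG "container finished ... exitCode=0" at 11:51:00, so the broker stopped cleanly in about seven seconds, nowhere near the limit. A sketch that pairs the two entries by container ID and prints the elapsed time; the regexes and the hard-coded year are assumptions, since journal timestamps omit the year:

    import re
    import sys
    from datetime import datetime

    KILL = re.compile(r'^(\w{3} \d{2} [\d:]{8}) .*"Killing container with a grace period"'
                      r'.*containerID="cri-o://([0-9a-f]+)"')
    DIED = re.compile(r'^(\w{3} \d{2} [\d:]{8}) .*container finished.*'
                      r'containerID="([0-9a-f]+)" exitCode=(-?\d+)')

    def ts(stamp):  # year 2025 assumed purely so the subtraction works
        return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S")

    pending = {}  # container ID -> kill timestamp
    for line in sys.stdin:
        if (m := KILL.search(line)):
            pending[m.group(2)] = ts(m.group(1))
        elif (m := DIED.search(line)) and m.group(2) in pending:
            took = ts(m.group(1)) - pending.pop(m.group(2))
            print(f"{m.group(2)[:12]}  exit={m.group(3)}  after {took.total_seconds():.0f}s")

A nonzero exit code or an elapsed time close to the grace period would indicate the container had to be forcibly terminated rather than shutting down on its own.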
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.232288 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.232680 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5bghx"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.232750 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.232559 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.232908 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.233256 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.235436 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.242784 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.403541 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.403590 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.403617 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.403762 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.403792 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.403810 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.403843 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-config-data\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.403891 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.403924 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.404001 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bqwn\" (UniqueName: \"kubernetes.io/projected/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-kube-api-access-7bqwn\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.404054 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.505680 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.505782 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.505807 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.505828 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.505863 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-config-data\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.505895 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.505917 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.505973 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bqwn\" (UniqueName: \"kubernetes.io/projected/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-kube-api-access-7bqwn\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.506011 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.506067 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.506095 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.507369 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.507577 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.507678 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-config-data\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.508637 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.508865 4849 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.510955 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.512831 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.520007 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.521976 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.546232 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.551157 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.553872 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bqwn\" (UniqueName: \"kubernetes.io/projected/b6effe5a-3a21-4f55-905d-7f275cbe1f8f-kube-api-access-7bqwn\") pod \"rabbitmq-server-0\" (UID: \"b6effe5a-3a21-4f55-905d-7f275cbe1f8f\") " pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.722180 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.848481 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.914539 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-plugins\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") "
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.914673 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-config-data\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") "
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.914708 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") "
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.914746 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-server-conf\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") "
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.914808 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-confd\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") "
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.914852 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-tls\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") "
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.914897 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-erlang-cookie\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") "
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.914919 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e5432d8-b092-46cd-8aab-cb194ebb23f7-erlang-cookie-secret\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") "
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.914970 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-plugins-conf\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") "
Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.915002 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.915030 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8h4n\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-kube-api-access-c8h4n\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.915055 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e5432d8-b092-46cd-8aab-cb194ebb23f7-pod-info\") pod \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\" (UID: \"9e5432d8-b092-46cd-8aab-cb194ebb23f7\") " Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.915512 4849 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.919178 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.920070 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.920639 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.938705 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9e5432d8-b092-46cd-8aab-cb194ebb23f7-pod-info" (OuterVolumeSpecName: "pod-info") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.939235 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e5432d8-b092-46cd-8aab-cb194ebb23f7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.940347 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-kube-api-access-c8h4n" (OuterVolumeSpecName: "kube-api-access-c8h4n") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "kube-api-access-c8h4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.944692 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.994421 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-config-data" (OuterVolumeSpecName: "config-data") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:01 crc kubenswrapper[4849]: I1209 11:51:01.996860 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-server-conf" (OuterVolumeSpecName: "server-conf") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.018186 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.018235 4849 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.018245 4849 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.018254 4849 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.018264 4849 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e5432d8-b092-46cd-8aab-cb194ebb23f7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.018274 4849 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.018282 4849 reconciler_common.go:293] "Volume detached for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e5432d8-b092-46cd-8aab-cb194ebb23f7-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.018290 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8h4n\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-kube-api-access-c8h4n\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.018298 4849 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e5432d8-b092-46cd-8aab-cb194ebb23f7-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.046685 4849 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.064146 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9e5432d8-b092-46cd-8aab-cb194ebb23f7" (UID: "9e5432d8-b092-46cd-8aab-cb194ebb23f7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.116436 4849 generic.go:334] "Generic (PLEG): container finished" podID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" containerID="c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010" exitCode=0 Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.116476 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e5432d8-b092-46cd-8aab-cb194ebb23f7","Type":"ContainerDied","Data":"c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010"} Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.116517 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e5432d8-b092-46cd-8aab-cb194ebb23f7","Type":"ContainerDied","Data":"dd973ce95c9b556a1379c4bdc8887e0cea0e270733ba531cf5957d98663c5b56"} Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.116533 4849 scope.go:117] "RemoveContainer" containerID="c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.116700 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.124622 4849 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.124887 4849 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e5432d8-b092-46cd-8aab-cb194ebb23f7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.168951 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.171152 4849 scope.go:117] "RemoveContainer" containerID="f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.188670 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.214581 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 11:51:02 crc kubenswrapper[4849]: E1209 11:51:02.215039 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" containerName="rabbitmq" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.215059 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" containerName="rabbitmq" Dec 09 11:51:02 crc kubenswrapper[4849]: E1209 11:51:02.215102 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" containerName="setup-container" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.215111 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" containerName="setup-container" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.215316 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" containerName="rabbitmq" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.216456 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.222017 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.222052 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.222256 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.222342 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.222440 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.222567 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.222643 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4jdlp" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.228341 4849 scope.go:117] "RemoveContainer" containerID="c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.228984 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 11:51:02 crc kubenswrapper[4849]: E1209 11:51:02.233801 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010\": container with ID starting with c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010 not found: ID does not exist" containerID="c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.233852 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010"} err="failed to get container status \"c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010\": rpc error: code = NotFound desc = could not find container \"c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010\": container with ID starting with c0d978b70fbfc8c0981e21553c2a3b181faf7f586d0fb63f069d3e51bf82a010 not found: ID does not exist" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.233886 4849 scope.go:117] "RemoveContainer" containerID="f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663" Dec 09 11:51:02 crc kubenswrapper[4849]: E1209 11:51:02.236898 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663\": container with ID starting with f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663 not found: ID does not exist" containerID="f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.236947 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663"} 
err="failed to get container status \"f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663\": rpc error: code = NotFound desc = could not find container \"f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663\": container with ID starting with f14aeff8e9a699d29584826d972fe0d7f09877c465bb6ca7e4904b9f39c56663 not found: ID does not exist" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.332728 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdgqd\" (UniqueName: \"kubernetes.io/projected/ec518407-e004-4dde-8a57-91307009b4a3-kube-api-access-vdgqd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.332832 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec518407-e004-4dde-8a57-91307009b4a3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.332861 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec518407-e004-4dde-8a57-91307009b4a3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.332915 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec518407-e004-4dde-8a57-91307009b4a3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.332950 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.333030 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.333059 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec518407-e004-4dde-8a57-91307009b4a3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.333098 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.333145 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec518407-e004-4dde-8a57-91307009b4a3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.333171 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.333202 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.372623 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 11:51:02 crc kubenswrapper[4849]: W1209 11:51:02.375574 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6effe5a_3a21_4f55_905d_7f275cbe1f8f.slice/crio-24b8b24c7a9cbce03063f18ae26e4be3322387f1252220e51364cba68ba2e96e WatchSource:0}: Error finding container 24b8b24c7a9cbce03063f18ae26e4be3322387f1252220e51364cba68ba2e96e: Status 404 returned error can't find the container with id 24b8b24c7a9cbce03063f18ae26e4be3322387f1252220e51364cba68ba2e96e Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.434700 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec518407-e004-4dde-8a57-91307009b4a3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.434756 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.434810 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.434834 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec518407-e004-4dde-8a57-91307009b4a3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.434861 4849 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.434900 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec518407-e004-4dde-8a57-91307009b4a3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.434918 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.434941 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.434971 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdgqd\" (UniqueName: \"kubernetes.io/projected/ec518407-e004-4dde-8a57-91307009b4a3-kube-api-access-vdgqd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.435003 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec518407-e004-4dde-8a57-91307009b4a3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.435022 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec518407-e004-4dde-8a57-91307009b4a3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.435831 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.435918 4849 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.436131 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.436157 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec518407-e004-4dde-8a57-91307009b4a3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.436849 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec518407-e004-4dde-8a57-91307009b4a3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.436928 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec518407-e004-4dde-8a57-91307009b4a3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.442912 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec518407-e004-4dde-8a57-91307009b4a3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.445508 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec518407-e004-4dde-8a57-91307009b4a3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.446169 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.446787 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec518407-e004-4dde-8a57-91307009b4a3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.452063 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdgqd\" (UniqueName: \"kubernetes.io/projected/ec518407-e004-4dde-8a57-91307009b4a3-kube-api-access-vdgqd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.473663 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec518407-e004-4dde-8a57-91307009b4a3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.546976 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="86df3233-1d99-4023-9ff7-55bab063bd7e" path="/var/lib/kubelet/pods/86df3233-1d99-4023-9ff7-55bab063bd7e/volumes" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.548064 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e5432d8-b092-46cd-8aab-cb194ebb23f7" path="/var/lib/kubelet/pods/9e5432d8-b092-46cd-8aab-cb194ebb23f7/volumes" Dec 09 11:51:02 crc kubenswrapper[4849]: I1209 11:51:02.551507 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:03 crc kubenswrapper[4849]: I1209 11:51:03.091963 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 11:51:03 crc kubenswrapper[4849]: W1209 11:51:03.098614 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec518407_e004_4dde_8a57_91307009b4a3.slice/crio-1acb9ffdacb4b1d98fe2ac949034cfba02a756b60ae6adfa66a60a40c04e5800 WatchSource:0}: Error finding container 1acb9ffdacb4b1d98fe2ac949034cfba02a756b60ae6adfa66a60a40c04e5800: Status 404 returned error can't find the container with id 1acb9ffdacb4b1d98fe2ac949034cfba02a756b60ae6adfa66a60a40c04e5800 Dec 09 11:51:03 crc kubenswrapper[4849]: I1209 11:51:03.128192 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b6effe5a-3a21-4f55-905d-7f275cbe1f8f","Type":"ContainerStarted","Data":"24b8b24c7a9cbce03063f18ae26e4be3322387f1252220e51364cba68ba2e96e"} Dec 09 11:51:03 crc kubenswrapper[4849]: I1209 11:51:03.130178 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec518407-e004-4dde-8a57-91307009b4a3","Type":"ContainerStarted","Data":"1acb9ffdacb4b1d98fe2ac949034cfba02a756b60ae6adfa66a60a40c04e5800"} Dec 09 11:51:04 crc kubenswrapper[4849]: I1209 11:51:04.139208 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b6effe5a-3a21-4f55-905d-7f275cbe1f8f","Type":"ContainerStarted","Data":"66333241c4e14d4eb1b1ca6399ef2397e525cb2f1e864f956b00c1287f74707b"} Dec 09 11:51:05 crc kubenswrapper[4849]: I1209 11:51:05.148985 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec518407-e004-4dde-8a57-91307009b4a3","Type":"ContainerStarted","Data":"81bdbcd7638738c39522d47e7afbaa420ec9038ffd4619af43bafafe683f2b8c"} Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.560796 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-pwfdl"] Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.562791 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.567112 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.582480 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-pwfdl"] Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.731864 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-config\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.731938 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.732025 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.732053 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.732074 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.732487 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv779\" (UniqueName: \"kubernetes.io/projected/5c0a4a70-8401-4069-9849-048237a624b9-kube-api-access-rv779\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.834177 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv779\" (UniqueName: \"kubernetes.io/projected/5c0a4a70-8401-4069-9849-048237a624b9-kube-api-access-rv779\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.834251 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-config\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: 
\"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.834290 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.834339 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.834364 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.834388 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.835257 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.835310 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.835427 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-config\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.835515 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.836083 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc 
kubenswrapper[4849]: I1209 11:51:09.857199 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv779\" (UniqueName: \"kubernetes.io/projected/5c0a4a70-8401-4069-9849-048237a624b9-kube-api-access-rv779\") pod \"dnsmasq-dns-6447ccbd8f-pwfdl\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:09 crc kubenswrapper[4849]: I1209 11:51:09.880620 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:10 crc kubenswrapper[4849]: I1209 11:51:10.335616 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-pwfdl"] Dec 09 11:51:11 crc kubenswrapper[4849]: I1209 11:51:11.208999 4849 generic.go:334] "Generic (PLEG): container finished" podID="5c0a4a70-8401-4069-9849-048237a624b9" containerID="bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95" exitCode=0 Dec 09 11:51:11 crc kubenswrapper[4849]: I1209 11:51:11.209212 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" event={"ID":"5c0a4a70-8401-4069-9849-048237a624b9","Type":"ContainerDied","Data":"bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95"} Dec 09 11:51:11 crc kubenswrapper[4849]: I1209 11:51:11.210427 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" event={"ID":"5c0a4a70-8401-4069-9849-048237a624b9","Type":"ContainerStarted","Data":"8ad176871283fdf8346f7d53c1cf3d897108c122bf864a5a69713e1cc93f5434"} Dec 09 11:51:12 crc kubenswrapper[4849]: I1209 11:51:12.221809 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" event={"ID":"5c0a4a70-8401-4069-9849-048237a624b9","Type":"ContainerStarted","Data":"39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0"} Dec 09 11:51:12 crc kubenswrapper[4849]: I1209 11:51:12.222221 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:12 crc kubenswrapper[4849]: I1209 11:51:12.253137 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" podStartSLOduration=3.253117708 podStartE2EDuration="3.253117708s" podCreationTimestamp="2025-12-09 11:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:51:12.242850322 +0000 UTC m=+1454.782734658" watchObservedRunningTime="2025-12-09 11:51:12.253117708 +0000 UTC m=+1454.793002024" Dec 09 11:51:19 crc kubenswrapper[4849]: I1209 11:51:19.881554 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:19 crc kubenswrapper[4849]: I1209 11:51:19.956590 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zdq6r"] Dec 09 11:51:19 crc kubenswrapper[4849]: I1209 11:51:19.960035 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" podUID="42813931-a611-48f1-930f-97bc3e9cf6ac" containerName="dnsmasq-dns" containerID="cri-o://ec803d34d2ed0b646b83507ff1d002aa986892bd0808b40e1701de7d68a6eb7f" gracePeriod=10 Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.114426 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fb68d687f-pq4dx"] Dec 09 11:51:20 crc 
kubenswrapper[4849]: I1209 11:51:20.123713 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.164004 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb68d687f-pq4dx"] Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.265367 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-dns-svc\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.265523 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-ovsdbserver-nb\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.265564 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-config\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.265598 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-ovsdbserver-sb\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.265652 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.265708 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72x8l\" (UniqueName: \"kubernetes.io/projected/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-kube-api-access-72x8l\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.303193 4849 generic.go:334] "Generic (PLEG): container finished" podID="42813931-a611-48f1-930f-97bc3e9cf6ac" containerID="ec803d34d2ed0b646b83507ff1d002aa986892bd0808b40e1701de7d68a6eb7f" exitCode=0 Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.303247 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" event={"ID":"42813931-a611-48f1-930f-97bc3e9cf6ac","Type":"ContainerDied","Data":"ec803d34d2ed0b646b83507ff1d002aa986892bd0808b40e1701de7d68a6eb7f"} Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.367675 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-dns-svc\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.368079 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-ovsdbserver-nb\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.368106 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-config\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.368129 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-ovsdbserver-sb\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.368155 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.368194 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72x8l\" (UniqueName: \"kubernetes.io/projected/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-kube-api-access-72x8l\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.369452 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-dns-svc\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.381319 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-ovsdbserver-nb\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.385071 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-config\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.385071 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-ovsdbserver-sb\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: 
\"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.385272 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.393331 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72x8l\" (UniqueName: \"kubernetes.io/projected/1136dca8-4c2e-45f6-81bf-0b990a6af3b7-kube-api-access-72x8l\") pod \"dnsmasq-dns-fb68d687f-pq4dx\" (UID: \"1136dca8-4c2e-45f6-81bf-0b990a6af3b7\") " pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.483301 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.572519 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.671827 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-sb\") pod \"42813931-a611-48f1-930f-97bc3e9cf6ac\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.671868 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-nb\") pod \"42813931-a611-48f1-930f-97bc3e9cf6ac\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.674306 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d2rn\" (UniqueName: \"kubernetes.io/projected/42813931-a611-48f1-930f-97bc3e9cf6ac-kube-api-access-7d2rn\") pod \"42813931-a611-48f1-930f-97bc3e9cf6ac\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.674377 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-dns-svc\") pod \"42813931-a611-48f1-930f-97bc3e9cf6ac\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.674677 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-config\") pod \"42813931-a611-48f1-930f-97bc3e9cf6ac\" (UID: \"42813931-a611-48f1-930f-97bc3e9cf6ac\") " Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.683647 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42813931-a611-48f1-930f-97bc3e9cf6ac-kube-api-access-7d2rn" (OuterVolumeSpecName: "kube-api-access-7d2rn") pod "42813931-a611-48f1-930f-97bc3e9cf6ac" (UID: "42813931-a611-48f1-930f-97bc3e9cf6ac"). InnerVolumeSpecName "kube-api-access-7d2rn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.723178 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-config" (OuterVolumeSpecName: "config") pod "42813931-a611-48f1-930f-97bc3e9cf6ac" (UID: "42813931-a611-48f1-930f-97bc3e9cf6ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.729392 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42813931-a611-48f1-930f-97bc3e9cf6ac" (UID: "42813931-a611-48f1-930f-97bc3e9cf6ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.729422 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42813931-a611-48f1-930f-97bc3e9cf6ac" (UID: "42813931-a611-48f1-930f-97bc3e9cf6ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.756429 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42813931-a611-48f1-930f-97bc3e9cf6ac" (UID: "42813931-a611-48f1-930f-97bc3e9cf6ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.780111 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.780163 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.780179 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.780191 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d2rn\" (UniqueName: \"kubernetes.io/projected/42813931-a611-48f1-930f-97bc3e9cf6ac-kube-api-access-7d2rn\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.780201 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42813931-a611-48f1-930f-97bc3e9cf6ac-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:20 crc kubenswrapper[4849]: I1209 11:51:20.972004 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb68d687f-pq4dx"] Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.136926 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.136985 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.313393 4849 generic.go:334] "Generic (PLEG): container finished" podID="1136dca8-4c2e-45f6-81bf-0b990a6af3b7" containerID="1c77821c2ae8a1732ed2101a6d23b476cd7e5d67d5f40aa2733e2709388856d1" exitCode=0 Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.313473 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" event={"ID":"1136dca8-4c2e-45f6-81bf-0b990a6af3b7","Type":"ContainerDied","Data":"1c77821c2ae8a1732ed2101a6d23b476cd7e5d67d5f40aa2733e2709388856d1"} Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.313498 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" event={"ID":"1136dca8-4c2e-45f6-81bf-0b990a6af3b7","Type":"ContainerStarted","Data":"6b21defd6c1e7c74243ca1b8e91ce0f522d0f2e2d5f0c355835e64a59557cb68"} Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.321868 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" event={"ID":"42813931-a611-48f1-930f-97bc3e9cf6ac","Type":"ContainerDied","Data":"01cf1f59861b90d124059265a35eb3363d1347ba2074afdcced6d2305ee8bfcb"} Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.321915 4849 scope.go:117] "RemoveContainer" containerID="ec803d34d2ed0b646b83507ff1d002aa986892bd0808b40e1701de7d68a6eb7f" Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.322061 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-zdq6r" Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.467086 4849 scope.go:117] "RemoveContainer" containerID="97a172885be55de61ac16a029bcad5b8a767c25424c07b6163717ce143ad333f" Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.484778 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zdq6r"] Dec 09 11:51:21 crc kubenswrapper[4849]: I1209 11:51:21.494164 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zdq6r"] Dec 09 11:51:22 crc kubenswrapper[4849]: I1209 11:51:22.334257 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" event={"ID":"1136dca8-4c2e-45f6-81bf-0b990a6af3b7","Type":"ContainerStarted","Data":"2fdb3a0c93acb3d3e0de250e0928c1385676117bb7a2cdad0e85735e1c63a794"} Dec 09 11:51:22 crc kubenswrapper[4849]: I1209 11:51:22.334748 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:22 crc kubenswrapper[4849]: I1209 11:51:22.355712 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" podStartSLOduration=2.355690472 podStartE2EDuration="2.355690472s" podCreationTimestamp="2025-12-09 11:51:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:51:22.354295558 +0000 UTC m=+1464.894179884" watchObservedRunningTime="2025-12-09 11:51:22.355690472 +0000 UTC m=+1464.895574788" Dec 09 11:51:22 crc kubenswrapper[4849]: I1209 11:51:22.549212 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42813931-a611-48f1-930f-97bc3e9cf6ac" path="/var/lib/kubelet/pods/42813931-a611-48f1-930f-97bc3e9cf6ac/volumes" Dec 09 11:51:30 crc kubenswrapper[4849]: I1209 11:51:30.485494 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fb68d687f-pq4dx" Dec 09 11:51:30 crc kubenswrapper[4849]: I1209 11:51:30.588611 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-pwfdl"] Dec 09 11:51:30 crc kubenswrapper[4849]: I1209 11:51:30.588900 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" podUID="5c0a4a70-8401-4069-9849-048237a624b9" containerName="dnsmasq-dns" containerID="cri-o://39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0" gracePeriod=10 Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.062634 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.197068 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-sb\") pod \"5c0a4a70-8401-4069-9849-048237a624b9\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.197208 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-openstack-edpm-ipam\") pod \"5c0a4a70-8401-4069-9849-048237a624b9\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.197315 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-dns-svc\") pod \"5c0a4a70-8401-4069-9849-048237a624b9\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.197401 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv779\" (UniqueName: \"kubernetes.io/projected/5c0a4a70-8401-4069-9849-048237a624b9-kube-api-access-rv779\") pod \"5c0a4a70-8401-4069-9849-048237a624b9\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.197478 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-config\") pod \"5c0a4a70-8401-4069-9849-048237a624b9\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.197509 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-nb\") pod \"5c0a4a70-8401-4069-9849-048237a624b9\" (UID: \"5c0a4a70-8401-4069-9849-048237a624b9\") " Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.204188 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0a4a70-8401-4069-9849-048237a624b9-kube-api-access-rv779" (OuterVolumeSpecName: "kube-api-access-rv779") pod "5c0a4a70-8401-4069-9849-048237a624b9" (UID: "5c0a4a70-8401-4069-9849-048237a624b9"). InnerVolumeSpecName "kube-api-access-rv779". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.256364 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c0a4a70-8401-4069-9849-048237a624b9" (UID: "5c0a4a70-8401-4069-9849-048237a624b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.274803 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c0a4a70-8401-4069-9849-048237a624b9" (UID: "5c0a4a70-8401-4069-9849-048237a624b9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.289878 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-config" (OuterVolumeSpecName: "config") pod "5c0a4a70-8401-4069-9849-048237a624b9" (UID: "5c0a4a70-8401-4069-9849-048237a624b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.290287 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5c0a4a70-8401-4069-9849-048237a624b9" (UID: "5c0a4a70-8401-4069-9849-048237a624b9"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.291245 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c0a4a70-8401-4069-9849-048237a624b9" (UID: "5c0a4a70-8401-4069-9849-048237a624b9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.299082 4849 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.299107 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv779\" (UniqueName: \"kubernetes.io/projected/5c0a4a70-8401-4069-9849-048237a624b9-kube-api-access-rv779\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.299117 4849 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-config\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.299127 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.299135 4849 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.299143 4849 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c0a4a70-8401-4069-9849-048237a624b9-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.423070 4849 generic.go:334] "Generic (PLEG): container finished" podID="5c0a4a70-8401-4069-9849-048237a624b9" containerID="39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0" exitCode=0 Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.423125 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" event={"ID":"5c0a4a70-8401-4069-9849-048237a624b9","Type":"ContainerDied","Data":"39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0"} Dec 09 
11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.423156 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" event={"ID":"5c0a4a70-8401-4069-9849-048237a624b9","Type":"ContainerDied","Data":"8ad176871283fdf8346f7d53c1cf3d897108c122bf864a5a69713e1cc93f5434"} Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.423178 4849 scope.go:117] "RemoveContainer" containerID="39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.423541 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-pwfdl" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.448901 4849 scope.go:117] "RemoveContainer" containerID="bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.460014 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-pwfdl"] Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.469529 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-pwfdl"] Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.474810 4849 scope.go:117] "RemoveContainer" containerID="39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0" Dec 09 11:51:31 crc kubenswrapper[4849]: E1209 11:51:31.479517 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0\": container with ID starting with 39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0 not found: ID does not exist" containerID="39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.479569 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0"} err="failed to get container status \"39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0\": rpc error: code = NotFound desc = could not find container \"39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0\": container with ID starting with 39aa1c129e0be9a9402898b978b41cab4f6c507882d1800a5d1a09b3e5ac9af0 not found: ID does not exist" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.479631 4849 scope.go:117] "RemoveContainer" containerID="bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95" Dec 09 11:51:31 crc kubenswrapper[4849]: E1209 11:51:31.479977 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95\": container with ID starting with bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95 not found: ID does not exist" containerID="bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95" Dec 09 11:51:31 crc kubenswrapper[4849]: I1209 11:51:31.480013 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95"} err="failed to get container status \"bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95\": rpc error: code = NotFound desc = could not find container \"bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95\": container with ID starting 
with bcf6fdff59036421800c1af10eabbd908a5a9f1bd56161b442a6c0e5ba510a95 not found: ID does not exist" Dec 09 11:51:32 crc kubenswrapper[4849]: I1209 11:51:32.546815 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0a4a70-8401-4069-9849-048237a624b9" path="/var/lib/kubelet/pods/5c0a4a70-8401-4069-9849-048237a624b9/volumes" Dec 09 11:51:35 crc kubenswrapper[4849]: E1209 11:51:35.957939 4849 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6effe5a_3a21_4f55_905d_7f275cbe1f8f.slice/crio-66333241c4e14d4eb1b1ca6399ef2397e525cb2f1e864f956b00c1287f74707b.scope\": RecentStats: unable to find data in memory cache]" Dec 09 11:51:36 crc kubenswrapper[4849]: I1209 11:51:36.468468 4849 generic.go:334] "Generic (PLEG): container finished" podID="b6effe5a-3a21-4f55-905d-7f275cbe1f8f" containerID="66333241c4e14d4eb1b1ca6399ef2397e525cb2f1e864f956b00c1287f74707b" exitCode=0 Dec 09 11:51:36 crc kubenswrapper[4849]: I1209 11:51:36.468534 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b6effe5a-3a21-4f55-905d-7f275cbe1f8f","Type":"ContainerDied","Data":"66333241c4e14d4eb1b1ca6399ef2397e525cb2f1e864f956b00c1287f74707b"} Dec 09 11:51:37 crc kubenswrapper[4849]: I1209 11:51:37.478449 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b6effe5a-3a21-4f55-905d-7f275cbe1f8f","Type":"ContainerStarted","Data":"79bdae7249004d1a474319cd4181210bcec90638f29ebb2c8c191531d2359ab3"} Dec 09 11:51:37 crc kubenswrapper[4849]: I1209 11:51:37.478984 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 11:51:37 crc kubenswrapper[4849]: I1209 11:51:37.480176 4849 generic.go:334] "Generic (PLEG): container finished" podID="ec518407-e004-4dde-8a57-91307009b4a3" containerID="81bdbcd7638738c39522d47e7afbaa420ec9038ffd4619af43bafafe683f2b8c" exitCode=0 Dec 09 11:51:37 crc kubenswrapper[4849]: I1209 11:51:37.480233 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec518407-e004-4dde-8a57-91307009b4a3","Type":"ContainerDied","Data":"81bdbcd7638738c39522d47e7afbaa420ec9038ffd4619af43bafafe683f2b8c"} Dec 09 11:51:37 crc kubenswrapper[4849]: I1209 11:51:37.520199 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.520176656 podStartE2EDuration="36.520176656s" podCreationTimestamp="2025-12-09 11:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:51:37.515127 +0000 UTC m=+1480.055011336" watchObservedRunningTime="2025-12-09 11:51:37.520176656 +0000 UTC m=+1480.060060972" Dec 09 11:51:38 crc kubenswrapper[4849]: I1209 11:51:38.490169 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec518407-e004-4dde-8a57-91307009b4a3","Type":"ContainerStarted","Data":"c4f160ebfe078fdf58b19c78a5ccf4fb6a70c4e714908d0be6d9b9a3263e38b6"} Dec 09 11:51:38 crc kubenswrapper[4849]: I1209 11:51:38.491583 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:38 crc kubenswrapper[4849]: I1209 11:51:38.529647 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.529628277 podStartE2EDuration="36.529628277s" podCreationTimestamp="2025-12-09 11:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 11:51:38.517842243 +0000 UTC m=+1481.057726569" watchObservedRunningTime="2025-12-09 11:51:38.529628277 +0000 UTC m=+1481.069512583" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.927958 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6"] Dec 09 11:51:40 crc kubenswrapper[4849]: E1209 11:51:40.928723 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0a4a70-8401-4069-9849-048237a624b9" containerName="dnsmasq-dns" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.928741 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0a4a70-8401-4069-9849-048237a624b9" containerName="dnsmasq-dns" Dec 09 11:51:40 crc kubenswrapper[4849]: E1209 11:51:40.928773 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42813931-a611-48f1-930f-97bc3e9cf6ac" containerName="init" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.928784 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="42813931-a611-48f1-930f-97bc3e9cf6ac" containerName="init" Dec 09 11:51:40 crc kubenswrapper[4849]: E1209 11:51:40.928796 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42813931-a611-48f1-930f-97bc3e9cf6ac" containerName="dnsmasq-dns" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.928805 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="42813931-a611-48f1-930f-97bc3e9cf6ac" containerName="dnsmasq-dns" Dec 09 11:51:40 crc kubenswrapper[4849]: E1209 11:51:40.928827 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0a4a70-8401-4069-9849-048237a624b9" containerName="init" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.928835 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0a4a70-8401-4069-9849-048237a624b9" containerName="init" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.929037 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0a4a70-8401-4069-9849-048237a624b9" containerName="dnsmasq-dns" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.929071 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="42813931-a611-48f1-930f-97bc3e9cf6ac" containerName="dnsmasq-dns" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.929798 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.932346 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.932358 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.933164 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j9nv" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.937428 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 11:51:40 crc kubenswrapper[4849]: I1209 11:51:40.957127 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6"] Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.037881 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2q9dx"] Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.039633 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.059295 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kllpl\" (UniqueName: \"kubernetes.io/projected/e366a1ff-a008-4f60-ba19-c4628338ab7d-kube-api-access-kllpl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.059401 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.059459 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.059581 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.077661 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2q9dx"] Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.161872 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-catalog-content\") pod \"redhat-operators-2q9dx\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.161943 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/aadeb44d-5735-4450-a10f-be5d224dc95b-kube-api-access-24dc8\") pod \"redhat-operators-2q9dx\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.162006 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.162060 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kllpl\" (UniqueName: \"kubernetes.io/projected/e366a1ff-a008-4f60-ba19-c4628338ab7d-kube-api-access-kllpl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.162137 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-utilities\") pod \"redhat-operators-2q9dx\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.162167 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.162199 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.168217 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.168236 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.174758 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.182266 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kllpl\" (UniqueName: \"kubernetes.io/projected/e366a1ff-a008-4f60-ba19-c4628338ab7d-kube-api-access-kllpl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.249496 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.263403 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-catalog-content\") pod \"redhat-operators-2q9dx\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.263662 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/aadeb44d-5735-4450-a10f-be5d224dc95b-kube-api-access-24dc8\") pod \"redhat-operators-2q9dx\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.263856 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-utilities\") pod \"redhat-operators-2q9dx\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.264327 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-catalog-content\") pod \"redhat-operators-2q9dx\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.264386 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-utilities\") pod \"redhat-operators-2q9dx\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.282066 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/aadeb44d-5735-4450-a10f-be5d224dc95b-kube-api-access-24dc8\") pod \"redhat-operators-2q9dx\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " 
pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.372047 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.904968 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2q9dx"] Dec 09 11:51:41 crc kubenswrapper[4849]: W1209 11:51:41.909211 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaadeb44d_5735_4450_a10f_be5d224dc95b.slice/crio-cfbf271b76d5e6fa62b076fb150c0f3e9cbea57a68f55131f2c3ba6845abf878 WatchSource:0}: Error finding container cfbf271b76d5e6fa62b076fb150c0f3e9cbea57a68f55131f2c3ba6845abf878: Status 404 returned error can't find the container with id cfbf271b76d5e6fa62b076fb150c0f3e9cbea57a68f55131f2c3ba6845abf878 Dec 09 11:51:41 crc kubenswrapper[4849]: I1209 11:51:41.953372 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6"] Dec 09 11:51:41 crc kubenswrapper[4849]: W1209 11:51:41.961316 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode366a1ff_a008_4f60_ba19_c4628338ab7d.slice/crio-fbc82c7d7c43692dd88326f38138ca8f82ccce605d9a4122671228a1228637b6 WatchSource:0}: Error finding container fbc82c7d7c43692dd88326f38138ca8f82ccce605d9a4122671228a1228637b6: Status 404 returned error can't find the container with id fbc82c7d7c43692dd88326f38138ca8f82ccce605d9a4122671228a1228637b6 Dec 09 11:51:42 crc kubenswrapper[4849]: I1209 11:51:42.555704 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" event={"ID":"e366a1ff-a008-4f60-ba19-c4628338ab7d","Type":"ContainerStarted","Data":"fbc82c7d7c43692dd88326f38138ca8f82ccce605d9a4122671228a1228637b6"} Dec 09 11:51:42 crc kubenswrapper[4849]: I1209 11:51:42.558823 4849 generic.go:334] "Generic (PLEG): container finished" podID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerID="7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a" exitCode=0 Dec 09 11:51:42 crc kubenswrapper[4849]: I1209 11:51:42.558881 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q9dx" event={"ID":"aadeb44d-5735-4450-a10f-be5d224dc95b","Type":"ContainerDied","Data":"7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a"} Dec 09 11:51:42 crc kubenswrapper[4849]: I1209 11:51:42.558908 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q9dx" event={"ID":"aadeb44d-5735-4450-a10f-be5d224dc95b","Type":"ContainerStarted","Data":"cfbf271b76d5e6fa62b076fb150c0f3e9cbea57a68f55131f2c3ba6845abf878"} Dec 09 11:51:43 crc kubenswrapper[4849]: I1209 11:51:43.571225 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q9dx" event={"ID":"aadeb44d-5735-4450-a10f-be5d224dc95b","Type":"ContainerStarted","Data":"5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34"} Dec 09 11:51:49 crc kubenswrapper[4849]: I1209 11:51:49.668215 4849 generic.go:334] "Generic (PLEG): container finished" podID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerID="5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34" exitCode=0 Dec 09 11:51:49 crc kubenswrapper[4849]: I1209 11:51:49.668277 
4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q9dx" event={"ID":"aadeb44d-5735-4450-a10f-be5d224dc95b","Type":"ContainerDied","Data":"5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34"} Dec 09 11:51:51 crc kubenswrapper[4849]: I1209 11:51:51.133307 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:51:51 crc kubenswrapper[4849]: I1209 11:51:51.133450 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:51:51 crc kubenswrapper[4849]: I1209 11:51:51.853688 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 09 11:51:52 crc kubenswrapper[4849]: I1209 11:51:52.554554 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 11:51:56 crc kubenswrapper[4849]: I1209 11:51:56.789425 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" event={"ID":"e366a1ff-a008-4f60-ba19-c4628338ab7d","Type":"ContainerStarted","Data":"ecbc3df1f22450ff49b9e66da99db3082ecef35d75eee8b2abef18506cc24a2c"} Dec 09 11:51:56 crc kubenswrapper[4849]: I1209 11:51:56.822182 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" podStartSLOduration=2.826649044 podStartE2EDuration="16.822163736s" podCreationTimestamp="2025-12-09 11:51:40 +0000 UTC" firstStartedPulling="2025-12-09 11:51:41.969669229 +0000 UTC m=+1484.509553545" lastFinishedPulling="2025-12-09 11:51:55.965183931 +0000 UTC m=+1498.505068237" observedRunningTime="2025-12-09 11:51:56.809760146 +0000 UTC m=+1499.349644472" watchObservedRunningTime="2025-12-09 11:51:56.822163736 +0000 UTC m=+1499.362048052" Dec 09 11:51:57 crc kubenswrapper[4849]: I1209 11:51:57.802354 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q9dx" event={"ID":"aadeb44d-5735-4450-a10f-be5d224dc95b","Type":"ContainerStarted","Data":"cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7"} Dec 09 11:51:57 crc kubenswrapper[4849]: I1209 11:51:57.836309 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2q9dx" podStartSLOduration=2.988792334 podStartE2EDuration="16.836282467s" podCreationTimestamp="2025-12-09 11:51:41 +0000 UTC" firstStartedPulling="2025-12-09 11:51:42.561021195 +0000 UTC m=+1485.100905511" lastFinishedPulling="2025-12-09 11:51:56.408511338 +0000 UTC m=+1498.948395644" observedRunningTime="2025-12-09 11:51:57.825211961 +0000 UTC m=+1500.365096287" watchObservedRunningTime="2025-12-09 11:51:57.836282467 +0000 UTC m=+1500.376166783" Dec 09 11:52:01 crc kubenswrapper[4849]: I1209 11:52:01.373127 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:52:01 crc kubenswrapper[4849]: I1209 11:52:01.373656 4849 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:52:02 crc kubenswrapper[4849]: I1209 11:52:02.424564 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2q9dx" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerName="registry-server" probeResult="failure" output=< Dec 09 11:52:02 crc kubenswrapper[4849]: timeout: failed to connect service ":50051" within 1s Dec 09 11:52:02 crc kubenswrapper[4849]: > Dec 09 11:52:04 crc kubenswrapper[4849]: I1209 11:52:04.797220 4849 scope.go:117] "RemoveContainer" containerID="0b51148149f3594154a79630a3802e503af16038d2fea050f55007307679f39c" Dec 09 11:52:04 crc kubenswrapper[4849]: I1209 11:52:04.833147 4849 scope.go:117] "RemoveContainer" containerID="f0722374ea33d17bded4962684ec2dd05380544139829e16935e9acdf0bfadf9" Dec 09 11:52:04 crc kubenswrapper[4849]: I1209 11:52:04.895083 4849 scope.go:117] "RemoveContainer" containerID="0b0abbb896d4f2a29eadeded67dc3b2b9705c1bee2c164d6e717b4a010e2735b" Dec 09 11:52:08 crc kubenswrapper[4849]: I1209 11:52:08.901330 4849 generic.go:334] "Generic (PLEG): container finished" podID="e366a1ff-a008-4f60-ba19-c4628338ab7d" containerID="ecbc3df1f22450ff49b9e66da99db3082ecef35d75eee8b2abef18506cc24a2c" exitCode=0 Dec 09 11:52:08 crc kubenswrapper[4849]: I1209 11:52:08.901463 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" event={"ID":"e366a1ff-a008-4f60-ba19-c4628338ab7d","Type":"ContainerDied","Data":"ecbc3df1f22450ff49b9e66da99db3082ecef35d75eee8b2abef18506cc24a2c"} Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.281795 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.307133 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-inventory\") pod \"e366a1ff-a008-4f60-ba19-c4628338ab7d\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.307232 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-repo-setup-combined-ca-bundle\") pod \"e366a1ff-a008-4f60-ba19-c4628338ab7d\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.307291 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kllpl\" (UniqueName: \"kubernetes.io/projected/e366a1ff-a008-4f60-ba19-c4628338ab7d-kube-api-access-kllpl\") pod \"e366a1ff-a008-4f60-ba19-c4628338ab7d\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.307317 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-ssh-key\") pod \"e366a1ff-a008-4f60-ba19-c4628338ab7d\" (UID: \"e366a1ff-a008-4f60-ba19-c4628338ab7d\") " Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.324138 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e366a1ff-a008-4f60-ba19-c4628338ab7d" (UID: "e366a1ff-a008-4f60-ba19-c4628338ab7d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.327728 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e366a1ff-a008-4f60-ba19-c4628338ab7d-kube-api-access-kllpl" (OuterVolumeSpecName: "kube-api-access-kllpl") pod "e366a1ff-a008-4f60-ba19-c4628338ab7d" (UID: "e366a1ff-a008-4f60-ba19-c4628338ab7d"). InnerVolumeSpecName "kube-api-access-kllpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.340298 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-inventory" (OuterVolumeSpecName: "inventory") pod "e366a1ff-a008-4f60-ba19-c4628338ab7d" (UID: "e366a1ff-a008-4f60-ba19-c4628338ab7d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.343938 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e366a1ff-a008-4f60-ba19-c4628338ab7d" (UID: "e366a1ff-a008-4f60-ba19-c4628338ab7d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.409228 4849 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.409259 4849 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.409271 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kllpl\" (UniqueName: \"kubernetes.io/projected/e366a1ff-a008-4f60-ba19-c4628338ab7d-kube-api-access-kllpl\") on node \"crc\" DevicePath \"\"" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.409281 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e366a1ff-a008-4f60-ba19-c4628338ab7d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.922187 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" event={"ID":"e366a1ff-a008-4f60-ba19-c4628338ab7d","Type":"ContainerDied","Data":"fbc82c7d7c43692dd88326f38138ca8f82ccce605d9a4122671228a1228637b6"} Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.922230 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbc82c7d7c43692dd88326f38138ca8f82ccce605d9a4122671228a1228637b6" Dec 09 11:52:10 crc kubenswrapper[4849]: I1209 11:52:10.922286 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.009664 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z"] Dec 09 11:52:11 crc kubenswrapper[4849]: E1209 11:52:11.010109 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e366a1ff-a008-4f60-ba19-c4628338ab7d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.010127 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="e366a1ff-a008-4f60-ba19-c4628338ab7d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.010300 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="e366a1ff-a008-4f60-ba19-c4628338ab7d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.011025 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.013825 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j9nv" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.015610 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.018381 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.018459 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5np5\" (UniqueName: \"kubernetes.io/projected/04376a83-eea2-4010-8403-0852cbf6b7de-kube-api-access-j5np5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.018512 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.018558 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.019871 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.020562 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.030081 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z"] Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.119906 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.119980 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5np5\" (UniqueName: \"kubernetes.io/projected/04376a83-eea2-4010-8403-0852cbf6b7de-kube-api-access-j5np5\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.120030 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.120069 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.125095 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.125200 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.136391 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.141077 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5np5\" (UniqueName: \"kubernetes.io/projected/04376a83-eea2-4010-8403-0852cbf6b7de-kube-api-access-j5np5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.329070 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.430112 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.498099 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.924844 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z"] Dec 09 11:52:11 crc kubenswrapper[4849]: I1209 11:52:11.932924 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" event={"ID":"04376a83-eea2-4010-8403-0852cbf6b7de","Type":"ContainerStarted","Data":"97552ce81edbf784654ef2f4a6544ec47d97e626e818313dd1a4e455967fb789"} Dec 09 11:52:12 crc kubenswrapper[4849]: I1209 11:52:12.244155 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2q9dx"] Dec 09 11:52:12 crc kubenswrapper[4849]: I1209 11:52:12.944644 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2q9dx" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerName="registry-server" containerID="cri-o://cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7" gracePeriod=2 Dec 09 11:52:12 crc kubenswrapper[4849]: I1209 11:52:12.946039 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" event={"ID":"04376a83-eea2-4010-8403-0852cbf6b7de","Type":"ContainerStarted","Data":"c9e7f1a26f2aaed56dae58699dad2e2685a92ef5cda7994786ec289ca8b79dcb"} Dec 09 11:52:12 crc kubenswrapper[4849]: I1209 11:52:12.970393 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" podStartSLOduration=2.482113903 podStartE2EDuration="2.97037038s" podCreationTimestamp="2025-12-09 11:52:10 +0000 UTC" firstStartedPulling="2025-12-09 11:52:11.927837001 +0000 UTC m=+1514.467721317" lastFinishedPulling="2025-12-09 11:52:12.416093468 +0000 UTC m=+1514.955977794" observedRunningTime="2025-12-09 11:52:12.963679803 +0000 UTC m=+1515.503564139" watchObservedRunningTime="2025-12-09 11:52:12.97037038 +0000 UTC m=+1515.510254696"
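Note on the pod_startup_latency_tracker entry above: podStartSLOduration (2.482113903s) is podStartE2EDuration (2.97037038s) minus the image-pull window, i.e. lastFinishedPulling minus firstStartedPulling taken from the monotonic m=+ offsets (1514.955977794 - 1514.467721317 = 0.488256477s), consistent with the tracker excluding time spent pulling images from the startup SLI. A minimal sketch of that arithmetic; the variable names are illustrative, not kubelet identifiers:

```go
package main

import "fmt"

func main() {
	// Monotonic m=+ offsets copied from the "Observed pod startup duration"
	// entry for bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z above.
	const (
		firstStartedPulling = 1514.467721317 // seconds since kubelet start
		lastFinishedPulling = 1514.955977794
		podStartE2EDuration = 2.97037038 // observedRunningTime - podCreationTimestamp
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	sloDuration := podStartE2EDuration - pullWindow
	fmt.Printf("image-pull window:   %.9fs\n", pullWindow)  // 0.488256477s
	fmt.Printf("podStartSLOduration: %.9fs\n", sloDuration) // 2.482113903s, as logged
}
```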
Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.362959 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.386688 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/aadeb44d-5735-4450-a10f-be5d224dc95b-kube-api-access-24dc8\") pod \"aadeb44d-5735-4450-a10f-be5d224dc95b\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.386746 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-utilities\") pod \"aadeb44d-5735-4450-a10f-be5d224dc95b\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.386886 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-catalog-content\") pod \"aadeb44d-5735-4450-a10f-be5d224dc95b\" (UID: \"aadeb44d-5735-4450-a10f-be5d224dc95b\") " Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.388499 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-utilities" (OuterVolumeSpecName: "utilities") pod "aadeb44d-5735-4450-a10f-be5d224dc95b" (UID: "aadeb44d-5735-4450-a10f-be5d224dc95b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.394196 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aadeb44d-5735-4450-a10f-be5d224dc95b-kube-api-access-24dc8" (OuterVolumeSpecName: "kube-api-access-24dc8") pod "aadeb44d-5735-4450-a10f-be5d224dc95b" (UID: "aadeb44d-5735-4450-a10f-be5d224dc95b"). InnerVolumeSpecName "kube-api-access-24dc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.489425 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24dc8\" (UniqueName: \"kubernetes.io/projected/aadeb44d-5735-4450-a10f-be5d224dc95b-kube-api-access-24dc8\") on node \"crc\" DevicePath \"\"" Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.489452 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-utilities\") on node \"crc\" DevicePath \"\""
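The UnmountVolume/TearDown entries above name the plugin backing each volume of the deleted redhat-operators-2q9dx pod: kubernetes.io/projected for the kube-api-access-* service-account token volume, kubernetes.io/empty-dir for utilities and catalog-content. A rough sketch of how those plugin names correspond to volume sources in a pod spec, assuming k8s.io/api is available; the mapping below is illustrative, not kubelet's actual plugin resolver:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// pluginFor maps a pod-spec volume source to the plugin-name style seen in
// the reconciler log lines above (illustrative subset only).
func pluginFor(v corev1.Volume) string {
	switch {
	case v.EmptyDir != nil:
		return "kubernetes.io/empty-dir"
	case v.Projected != nil: // kube-api-access-* token volumes are projected
		return "kubernetes.io/projected"
	case v.Secret != nil:
		return "kubernetes.io/secret"
	default:
		return "unknown"
	}
}

func main() {
	vols := []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "kube-api-access-24dc8", VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{}}},
	}
	for _, v := range vols {
		fmt.Printf("%s -> %s\n", v.Name, pluginFor(v))
	}
}
```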
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.591105 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aadeb44d-5735-4450-a10f-be5d224dc95b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.956678 4849 generic.go:334] "Generic (PLEG): container finished" podID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerID="cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7" exitCode=0 Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.956724 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q9dx" event={"ID":"aadeb44d-5735-4450-a10f-be5d224dc95b","Type":"ContainerDied","Data":"cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7"} Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.957047 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q9dx" event={"ID":"aadeb44d-5735-4450-a10f-be5d224dc95b","Type":"ContainerDied","Data":"cfbf271b76d5e6fa62b076fb150c0f3e9cbea57a68f55131f2c3ba6845abf878"} Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.957078 4849 scope.go:117] "RemoveContainer" containerID="cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7" Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.956792 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q9dx" Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.992226 4849 scope.go:117] "RemoveContainer" containerID="5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34" Dec 09 11:52:13 crc kubenswrapper[4849]: I1209 11:52:13.996114 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2q9dx"] Dec 09 11:52:14 crc kubenswrapper[4849]: I1209 11:52:14.014404 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2q9dx"] Dec 09 11:52:14 crc kubenswrapper[4849]: I1209 11:52:14.021805 4849 scope.go:117] "RemoveContainer" containerID="7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a" Dec 09 11:52:14 crc kubenswrapper[4849]: I1209 11:52:14.061126 4849 scope.go:117] "RemoveContainer" containerID="cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7" Dec 09 11:52:14 crc kubenswrapper[4849]: E1209 11:52:14.061747 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7\": container with ID starting with cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7 not found: ID does not exist" containerID="cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7" Dec 09 11:52:14 crc kubenswrapper[4849]: I1209 11:52:14.061810 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7"} err="failed to get container status \"cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7\": rpc error: code = NotFound desc = could not find container \"cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7\": container with ID starting with cfcb5aeab854cf100a69abef2bbfc79222a1aa7ba581dbe03ce8c366c99c8ff7 not found: ID does not exist" Dec 09 11:52:14 crc 
Dec 09 11:52:14 crc kubenswrapper[4849]: I1209 11:52:14.061842 4849 scope.go:117] "RemoveContainer" containerID="5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34" Dec 09 11:52:14 crc kubenswrapper[4849]: E1209 11:52:14.062236 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34\": container with ID starting with 5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34 not found: ID does not exist" containerID="5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34" Dec 09 11:52:14 crc kubenswrapper[4849]: I1209 11:52:14.062281 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34"} err="failed to get container status \"5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34\": rpc error: code = NotFound desc = could not find container \"5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34\": container with ID starting with 5bdfceadd3ca02fc7499c584228c6fa52cbb887466a6887c013dd9360cc75b34 not found: ID does not exist" Dec 09 11:52:14 crc kubenswrapper[4849]: I1209 11:52:14.062316 4849 scope.go:117] "RemoveContainer" containerID="7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a" Dec 09 11:52:14 crc kubenswrapper[4849]: E1209 11:52:14.062709 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a\": container with ID starting with 7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a not found: ID does not exist" containerID="7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a" Dec 09 11:52:14 crc kubenswrapper[4849]: I1209 11:52:14.062741 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a"} err="failed to get container status \"7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a\": rpc error: code = NotFound desc = could not find container \"7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a\": container with ID starting with 7c0940dc8debb7a5c637c6e2154abcd37160cb08a78a6a1eef6b24f3d14dc38a not found: ID does not exist" Dec 09 11:52:14 crc kubenswrapper[4849]: I1209 11:52:14.549376 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" path="/var/lib/kubelet/pods/aadeb44d-5735-4450-a10f-be5d224dc95b/volumes" Dec 09 11:52:21 crc kubenswrapper[4849]: I1209 11:52:21.132644 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:52:21 crc kubenswrapper[4849]: I1209 11:52:21.133243 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
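The failing liveness probe above is an HTTP GET against http://127.0.0.1:8798/health; "connection refused" means nothing is listening on that port, and after enough failures the kubelet kills the container (see the gracePeriod=600 kill in the entries that follow). A hypothetical reconstruction of what such a probe looks like on a container spec: only the host, port, and path come from the logged URL, and the threshold values are placeholders, not the machine-config-operator's real manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Hypothetical liveness probe shaped like the one failing in the log:
	// Host/Port/Path are taken from the logged URL; PeriodSeconds and
	// FailureThreshold are placeholders for illustration only.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Port: intstr.FromInt(8798),
				Path: "/health",
			},
		},
		PeriodSeconds:    10, // placeholder
		FailureThreshold: 3,  // placeholder
	}
	fmt.Printf("probing GET http://%s:%s%s\n",
		probe.HTTPGet.Host, probe.HTTPGet.Port.String(), probe.HTTPGet.Path)
}
```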
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 11:52:21 crc kubenswrapper[4849]: I1209 11:52:21.134092 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 11:52:21 crc kubenswrapper[4849]: I1209 11:52:21.134158 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" gracePeriod=600 Dec 09 11:52:21 crc kubenswrapper[4849]: E1209 11:52:21.261928 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:52:22 crc kubenswrapper[4849]: I1209 11:52:22.030945 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" exitCode=0 Dec 09 11:52:22 crc kubenswrapper[4849]: I1209 11:52:22.031011 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7"} Dec 09 11:52:22 crc kubenswrapper[4849]: I1209 11:52:22.031427 4849 scope.go:117] "RemoveContainer" containerID="b5d0f54890d644c510ae563a6b696f7236880271ca1fac58424d719b2dbb5e99" Dec 09 11:52:22 crc kubenswrapper[4849]: I1209 11:52:22.032249 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:52:22 crc kubenswrapper[4849]: E1209 11:52:22.032629 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:52:36 crc kubenswrapper[4849]: I1209 11:52:36.536205 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:52:36 crc kubenswrapper[4849]: E1209 11:52:36.537094 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:52:48 
Dec 09 11:52:48 crc kubenswrapper[4849]: I1209 11:52:48.536849 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:52:48 crc kubenswrapper[4849]: E1209 11:52:48.537702 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.738284 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dzvkp"] Dec 09 11:52:50 crc kubenswrapper[4849]: E1209 11:52:50.739125 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerName="extract-content" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.739152 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerName="extract-content" Dec 09 11:52:50 crc kubenswrapper[4849]: E1209 11:52:50.739175 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerName="extract-utilities" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.739182 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerName="extract-utilities" Dec 09 11:52:50 crc kubenswrapper[4849]: E1209 11:52:50.739193 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerName="registry-server" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.739198 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerName="registry-server" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.739474 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="aadeb44d-5735-4450-a10f-be5d224dc95b" containerName="registry-server" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.742007 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.755153 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dzvkp"] Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.841950 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-catalog-content\") pod \"certified-operators-dzvkp\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.842654 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-utilities\") pod \"certified-operators-dzvkp\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.843347 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvr7f\" (UniqueName: \"kubernetes.io/projected/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-kube-api-access-cvr7f\") pod \"certified-operators-dzvkp\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.975884 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvr7f\" (UniqueName: \"kubernetes.io/projected/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-kube-api-access-cvr7f\") pod \"certified-operators-dzvkp\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.975965 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-catalog-content\") pod \"certified-operators-dzvkp\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.976081 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-utilities\") pod \"certified-operators-dzvkp\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.976612 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-utilities\") pod \"certified-operators-dzvkp\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:50 crc kubenswrapper[4849]: I1209 11:52:50.977290 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-catalog-content\") pod \"certified-operators-dzvkp\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:51 crc kubenswrapper[4849]: I1209 11:52:51.004317 4849 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cvr7f\" (UniqueName: \"kubernetes.io/projected/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-kube-api-access-cvr7f\") pod \"certified-operators-dzvkp\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:51 crc kubenswrapper[4849]: I1209 11:52:51.069248 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:52:51 crc kubenswrapper[4849]: I1209 11:52:51.582883 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dzvkp"] Dec 09 11:52:52 crc kubenswrapper[4849]: I1209 11:52:52.448052 4849 generic.go:334] "Generic (PLEG): container finished" podID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerID="741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c" exitCode=0 Dec 09 11:52:52 crc kubenswrapper[4849]: I1209 11:52:52.448120 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzvkp" event={"ID":"6c2b9213-0f74-4942-a4fe-ac2eb405efe2","Type":"ContainerDied","Data":"741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c"} Dec 09 11:52:52 crc kubenswrapper[4849]: I1209 11:52:52.448590 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzvkp" event={"ID":"6c2b9213-0f74-4942-a4fe-ac2eb405efe2","Type":"ContainerStarted","Data":"ba9596bc1bd2cd5db86833eacb3eb19c1e70da1bd773933faa2636a3a0d5dc28"} Dec 09 11:52:53 crc kubenswrapper[4849]: I1209 11:52:53.464638 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzvkp" event={"ID":"6c2b9213-0f74-4942-a4fe-ac2eb405efe2","Type":"ContainerStarted","Data":"a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40"} Dec 09 11:52:54 crc kubenswrapper[4849]: I1209 11:52:54.475376 4849 generic.go:334] "Generic (PLEG): container finished" podID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerID="a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40" exitCode=0 Dec 09 11:52:54 crc kubenswrapper[4849]: I1209 11:52:54.475501 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzvkp" event={"ID":"6c2b9213-0f74-4942-a4fe-ac2eb405efe2","Type":"ContainerDied","Data":"a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40"} Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.062255 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gqjtz"] Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.065085 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.073645 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqjtz"] Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.200382 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcwbk\" (UniqueName: \"kubernetes.io/projected/3c5611b9-2e83-438a-be25-9aba7cfe991b-kube-api-access-hcwbk\") pod \"community-operators-gqjtz\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.200461 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-utilities\") pod \"community-operators-gqjtz\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.200481 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-catalog-content\") pod \"community-operators-gqjtz\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.302444 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcwbk\" (UniqueName: \"kubernetes.io/projected/3c5611b9-2e83-438a-be25-9aba7cfe991b-kube-api-access-hcwbk\") pod \"community-operators-gqjtz\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.302524 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-utilities\") pod \"community-operators-gqjtz\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.302547 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-catalog-content\") pod \"community-operators-gqjtz\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.303108 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-catalog-content\") pod \"community-operators-gqjtz\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.303378 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-utilities\") pod \"community-operators-gqjtz\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.334207 4849 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hcwbk\" (UniqueName: \"kubernetes.io/projected/3c5611b9-2e83-438a-be25-9aba7cfe991b-kube-api-access-hcwbk\") pod \"community-operators-gqjtz\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.418518 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.506683 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzvkp" event={"ID":"6c2b9213-0f74-4942-a4fe-ac2eb405efe2","Type":"ContainerStarted","Data":"f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811"} Dec 09 11:52:55 crc kubenswrapper[4849]: I1209 11:52:55.531878 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dzvkp" podStartSLOduration=3.093744045 podStartE2EDuration="5.531859217s" podCreationTimestamp="2025-12-09 11:52:50 +0000 UTC" firstStartedPulling="2025-12-09 11:52:52.45093657 +0000 UTC m=+1554.990820886" lastFinishedPulling="2025-12-09 11:52:54.889051742 +0000 UTC m=+1557.428936058" observedRunningTime="2025-12-09 11:52:55.527236361 +0000 UTC m=+1558.067120667" watchObservedRunningTime="2025-12-09 11:52:55.531859217 +0000 UTC m=+1558.071743533" Dec 09 11:52:56 crc kubenswrapper[4849]: I1209 11:52:56.139936 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqjtz"] Dec 09 11:52:56 crc kubenswrapper[4849]: I1209 11:52:56.521233 4849 generic.go:334] "Generic (PLEG): container finished" podID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerID="151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c" exitCode=0 Dec 09 11:52:56 crc kubenswrapper[4849]: I1209 11:52:56.521284 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqjtz" event={"ID":"3c5611b9-2e83-438a-be25-9aba7cfe991b","Type":"ContainerDied","Data":"151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c"} Dec 09 11:52:56 crc kubenswrapper[4849]: I1209 11:52:56.522241 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqjtz" event={"ID":"3c5611b9-2e83-438a-be25-9aba7cfe991b","Type":"ContainerStarted","Data":"93c93d9ad9378de2f00de86f26be504da8e32459151fd32ba802675e4d0d0f27"} Dec 09 11:52:57 crc kubenswrapper[4849]: I1209 11:52:57.532989 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqjtz" event={"ID":"3c5611b9-2e83-438a-be25-9aba7cfe991b","Type":"ContainerStarted","Data":"3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763"} Dec 09 11:52:59 crc kubenswrapper[4849]: I1209 11:52:59.574150 4849 generic.go:334] "Generic (PLEG): container finished" podID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerID="3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763" exitCode=0 Dec 09 11:52:59 crc kubenswrapper[4849]: I1209 11:52:59.574241 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqjtz" event={"ID":"3c5611b9-2e83-438a-be25-9aba7cfe991b","Type":"ContainerDied","Data":"3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763"} Dec 09 11:53:01 crc kubenswrapper[4849]: I1209 11:53:01.070373 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:53:01 crc kubenswrapper[4849]: I1209 11:53:01.071034 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:53:01 crc kubenswrapper[4849]: I1209 11:53:01.121626 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:53:01 crc kubenswrapper[4849]: I1209 11:53:01.537093 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:53:01 crc kubenswrapper[4849]: E1209 11:53:01.537452 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:53:01 crc kubenswrapper[4849]: I1209 11:53:01.599428 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqjtz" event={"ID":"3c5611b9-2e83-438a-be25-9aba7cfe991b","Type":"ContainerStarted","Data":"ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a"} Dec 09 11:53:01 crc kubenswrapper[4849]: I1209 11:53:01.621332 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gqjtz" podStartSLOduration=2.7106555549999998 podStartE2EDuration="6.621301521s" podCreationTimestamp="2025-12-09 11:52:55 +0000 UTC" firstStartedPulling="2025-12-09 11:52:56.52498904 +0000 UTC m=+1559.064873346" lastFinishedPulling="2025-12-09 11:53:00.435634976 +0000 UTC m=+1562.975519312" observedRunningTime="2025-12-09 11:53:01.618205773 +0000 UTC m=+1564.158090089" watchObservedRunningTime="2025-12-09 11:53:01.621301521 +0000 UTC m=+1564.161185847" Dec 09 11:53:01 crc kubenswrapper[4849]: I1209 11:53:01.684387 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:53:02 crc kubenswrapper[4849]: I1209 11:53:02.859011 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dzvkp"] Dec 09 11:53:03 crc kubenswrapper[4849]: I1209 11:53:03.613991 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dzvkp" podUID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerName="registry-server" containerID="cri-o://f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811" gracePeriod=2 Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.068697 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.222645 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-catalog-content\") pod \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.222847 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvr7f\" (UniqueName: \"kubernetes.io/projected/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-kube-api-access-cvr7f\") pod \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.222990 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-utilities\") pod \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\" (UID: \"6c2b9213-0f74-4942-a4fe-ac2eb405efe2\") " Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.223621 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-utilities" (OuterVolumeSpecName: "utilities") pod "6c2b9213-0f74-4942-a4fe-ac2eb405efe2" (UID: "6c2b9213-0f74-4942-a4fe-ac2eb405efe2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.223972 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.228119 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-kube-api-access-cvr7f" (OuterVolumeSpecName: "kube-api-access-cvr7f") pod "6c2b9213-0f74-4942-a4fe-ac2eb405efe2" (UID: "6c2b9213-0f74-4942-a4fe-ac2eb405efe2"). InnerVolumeSpecName "kube-api-access-cvr7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.277607 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c2b9213-0f74-4942-a4fe-ac2eb405efe2" (UID: "6c2b9213-0f74-4942-a4fe-ac2eb405efe2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.325266 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.325547 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvr7f\" (UniqueName: \"kubernetes.io/projected/6c2b9213-0f74-4942-a4fe-ac2eb405efe2-kube-api-access-cvr7f\") on node \"crc\" DevicePath \"\"" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.624976 4849 generic.go:334] "Generic (PLEG): container finished" podID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerID="f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811" exitCode=0 Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.625028 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzvkp" event={"ID":"6c2b9213-0f74-4942-a4fe-ac2eb405efe2","Type":"ContainerDied","Data":"f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811"} Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.625052 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzvkp" event={"ID":"6c2b9213-0f74-4942-a4fe-ac2eb405efe2","Type":"ContainerDied","Data":"ba9596bc1bd2cd5db86833eacb3eb19c1e70da1bd773933faa2636a3a0d5dc28"} Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.625070 4849 scope.go:117] "RemoveContainer" containerID="f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.625198 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzvkp" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.654448 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dzvkp"] Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.654855 4849 scope.go:117] "RemoveContainer" containerID="a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.667858 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dzvkp"] Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.676947 4849 scope.go:117] "RemoveContainer" containerID="741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.721181 4849 scope.go:117] "RemoveContainer" containerID="f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811" Dec 09 11:53:04 crc kubenswrapper[4849]: E1209 11:53:04.722278 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811\": container with ID starting with f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811 not found: ID does not exist" containerID="f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.722513 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811"} err="failed to get container status \"f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811\": rpc error: code = NotFound desc = could not find container \"f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811\": container with ID starting with f798e6ecfa40d2df29d03f868ea0dd7bbd287953253b9bb772d9b0aa61753811 not found: ID does not exist" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.722708 4849 scope.go:117] "RemoveContainer" containerID="a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40" Dec 09 11:53:04 crc kubenswrapper[4849]: E1209 11:53:04.724200 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40\": container with ID starting with a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40 not found: ID does not exist" containerID="a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.724260 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40"} err="failed to get container status \"a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40\": rpc error: code = NotFound desc = could not find container \"a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40\": container with ID starting with a250b68cf2ce7b428a6082a04c69544c279e39dd73f3d080c7dabfa8e1e1fd40 not found: ID does not exist" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.724289 4849 scope.go:117] "RemoveContainer" containerID="741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c" Dec 09 11:53:04 crc kubenswrapper[4849]: E1209 11:53:04.725371 4849 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c\": container with ID starting with 741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c not found: ID does not exist" containerID="741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c" Dec 09 11:53:04 crc kubenswrapper[4849]: I1209 11:53:04.725481 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c"} err="failed to get container status \"741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c\": rpc error: code = NotFound desc = could not find container \"741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c\": container with ID starting with 741119cf76b0f553b071deb43d6a153e7fa366089b874aeae3d262b6bbf4408c not found: ID does not exist" Dec 09 11:53:05 crc kubenswrapper[4849]: I1209 11:53:05.011631 4849 scope.go:117] "RemoveContainer" containerID="9a76b8ed763aa01ba5fbec4d9d92a6b47b5579237428e89fc6780928ccf4db97" Dec 09 11:53:05 crc kubenswrapper[4849]: I1209 11:53:05.419357 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:53:05 crc kubenswrapper[4849]: I1209 11:53:05.420240 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:53:05 crc kubenswrapper[4849]: I1209 11:53:05.463208 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:53:05 crc kubenswrapper[4849]: I1209 11:53:05.681878 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:53:06 crc kubenswrapper[4849]: I1209 11:53:06.547471 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" path="/var/lib/kubelet/pods/6c2b9213-0f74-4942-a4fe-ac2eb405efe2/volumes" Dec 09 11:53:07 crc kubenswrapper[4849]: I1209 11:53:07.445953 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqjtz"] Dec 09 11:53:08 crc kubenswrapper[4849]: I1209 11:53:08.660195 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gqjtz" podUID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerName="registry-server" containerID="cri-o://ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a" gracePeriod=2 Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.152398 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.275909 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-catalog-content\") pod \"3c5611b9-2e83-438a-be25-9aba7cfe991b\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.276027 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-utilities\") pod \"3c5611b9-2e83-438a-be25-9aba7cfe991b\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.276192 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcwbk\" (UniqueName: \"kubernetes.io/projected/3c5611b9-2e83-438a-be25-9aba7cfe991b-kube-api-access-hcwbk\") pod \"3c5611b9-2e83-438a-be25-9aba7cfe991b\" (UID: \"3c5611b9-2e83-438a-be25-9aba7cfe991b\") " Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.276986 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-utilities" (OuterVolumeSpecName: "utilities") pod "3c5611b9-2e83-438a-be25-9aba7cfe991b" (UID: "3c5611b9-2e83-438a-be25-9aba7cfe991b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.281486 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c5611b9-2e83-438a-be25-9aba7cfe991b-kube-api-access-hcwbk" (OuterVolumeSpecName: "kube-api-access-hcwbk") pod "3c5611b9-2e83-438a-be25-9aba7cfe991b" (UID: "3c5611b9-2e83-438a-be25-9aba7cfe991b"). InnerVolumeSpecName "kube-api-access-hcwbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.333874 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c5611b9-2e83-438a-be25-9aba7cfe991b" (UID: "3c5611b9-2e83-438a-be25-9aba7cfe991b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.378437 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.378487 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcwbk\" (UniqueName: \"kubernetes.io/projected/3c5611b9-2e83-438a-be25-9aba7cfe991b-kube-api-access-hcwbk\") on node \"crc\" DevicePath \"\"" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.378502 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5611b9-2e83-438a-be25-9aba7cfe991b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.668742 4849 generic.go:334] "Generic (PLEG): container finished" podID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerID="ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a" exitCode=0 Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.668781 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqjtz" event={"ID":"3c5611b9-2e83-438a-be25-9aba7cfe991b","Type":"ContainerDied","Data":"ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a"} Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.668831 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqjtz" event={"ID":"3c5611b9-2e83-438a-be25-9aba7cfe991b","Type":"ContainerDied","Data":"93c93d9ad9378de2f00de86f26be504da8e32459151fd32ba802675e4d0d0f27"} Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.668848 4849 scope.go:117] "RemoveContainer" containerID="ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.668994 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gqjtz" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.702102 4849 scope.go:117] "RemoveContainer" containerID="3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.702928 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqjtz"] Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.710194 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gqjtz"] Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.741173 4849 scope.go:117] "RemoveContainer" containerID="151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.782163 4849 scope.go:117] "RemoveContainer" containerID="ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a" Dec 09 11:53:09 crc kubenswrapper[4849]: E1209 11:53:09.783012 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a\": container with ID starting with ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a not found: ID does not exist" containerID="ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.783167 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a"} err="failed to get container status \"ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a\": rpc error: code = NotFound desc = could not find container \"ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a\": container with ID starting with ba8b14cbeb56b7c218cb84fcbd9c1dbe26874484b6748948234e864a0b0d0a6a not found: ID does not exist" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.783295 4849 scope.go:117] "RemoveContainer" containerID="3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763" Dec 09 11:53:09 crc kubenswrapper[4849]: E1209 11:53:09.783920 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763\": container with ID starting with 3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763 not found: ID does not exist" containerID="3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.783942 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763"} err="failed to get container status \"3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763\": rpc error: code = NotFound desc = could not find container \"3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763\": container with ID starting with 3f42b1e5815ddd56ee7a2ee83735829bf2208b77ed25753c823598602810b763 not found: ID does not exist" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.783956 4849 scope.go:117] "RemoveContainer" containerID="151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c" Dec 09 11:53:09 crc kubenswrapper[4849]: E1209 11:53:09.784373 4849 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c\": container with ID starting with 151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c not found: ID does not exist" containerID="151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c" Dec 09 11:53:09 crc kubenswrapper[4849]: I1209 11:53:09.784540 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c"} err="failed to get container status \"151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c\": rpc error: code = NotFound desc = could not find container \"151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c\": container with ID starting with 151c6fe7570d19a46e62de92a901997e580b1bd766391f4c5313b7bda4ef7d8c not found: ID does not exist" Dec 09 11:53:10 crc kubenswrapper[4849]: I1209 11:53:10.546724 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c5611b9-2e83-438a-be25-9aba7cfe991b" path="/var/lib/kubelet/pods/3c5611b9-2e83-438a-be25-9aba7cfe991b/volumes" Dec 09 11:53:15 crc kubenswrapper[4849]: I1209 11:53:15.536103 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:53:15 crc kubenswrapper[4849]: E1209 11:53:15.536920 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:53:29 crc kubenswrapper[4849]: I1209 11:53:29.536363 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:53:29 crc kubenswrapper[4849]: E1209 11:53:29.537165 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:53:43 crc kubenswrapper[4849]: I1209 11:53:43.536906 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:53:43 crc kubenswrapper[4849]: E1209 11:53:43.537905 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:53:56 crc kubenswrapper[4849]: I1209 11:53:56.537452 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:53:56 crc kubenswrapper[4849]: E1209 11:53:56.538681 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:54:05 crc kubenswrapper[4849]: I1209 11:54:05.121719 4849 scope.go:117] "RemoveContainer" containerID="acc0df9a8d7c73e96c591a3f7c327ebca1b724a8c1a017b82d8c2090a1da80f9" Dec 09 11:54:05 crc kubenswrapper[4849]: I1209 11:54:05.153607 4849 scope.go:117] "RemoveContainer" containerID="a58ab147c8c30cc722aed8b5c896295949b3a789433b74c5848c0725f1916b5c" Dec 09 11:54:08 crc kubenswrapper[4849]: I1209 11:54:08.545744 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:54:08 crc kubenswrapper[4849]: E1209 11:54:08.546448 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:54:22 crc kubenswrapper[4849]: I1209 11:54:22.541634 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:54:22 crc kubenswrapper[4849]: E1209 11:54:22.543115 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:54:34 crc kubenswrapper[4849]: I1209 11:54:34.537288 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:54:34 crc kubenswrapper[4849]: E1209 11:54:34.538104 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:54:47 crc kubenswrapper[4849]: I1209 11:54:47.536486 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:54:47 crc kubenswrapper[4849]: E1209 11:54:47.537213 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:55:02 crc kubenswrapper[4849]: I1209 11:55:02.537342 4849 scope.go:117] "RemoveContainer" 
containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:55:02 crc kubenswrapper[4849]: E1209 11:55:02.538628 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:55:17 crc kubenswrapper[4849]: I1209 11:55:17.537203 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:55:17 crc kubenswrapper[4849]: E1209 11:55:17.538132 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:55:30 crc kubenswrapper[4849]: I1209 11:55:30.536834 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:55:30 crc kubenswrapper[4849]: E1209 11:55:30.537719 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:55:43 crc kubenswrapper[4849]: I1209 11:55:43.536608 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:55:43 crc kubenswrapper[4849]: E1209 11:55:43.537533 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:55:46 crc kubenswrapper[4849]: I1209 11:55:46.058394 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fbwzp"] Dec 09 11:55:46 crc kubenswrapper[4849]: I1209 11:55:46.072548 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fbwzp"] Dec 09 11:55:46 crc kubenswrapper[4849]: I1209 11:55:46.551341 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25daca92-ae6e-4c61-9352-5f84ab7c37ef" path="/var/lib/kubelet/pods/25daca92-ae6e-4c61-9352-5f84ab7c37ef/volumes" Dec 09 11:55:47 crc kubenswrapper[4849]: I1209 11:55:47.028367 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a6c1-account-create-update-j9xp8"] Dec 09 11:55:47 crc kubenswrapper[4849]: I1209 11:55:47.038200 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a6c1-account-create-update-j9xp8"] Dec 09 11:55:48 crc kubenswrapper[4849]: I1209 
11:55:48.553080 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96fa319a-9e59-492f-9ca8-bd4277eec701" path="/var/lib/kubelet/pods/96fa319a-9e59-492f-9ca8-bd4277eec701/volumes" Dec 09 11:55:51 crc kubenswrapper[4849]: I1209 11:55:51.035136 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ebfc-account-create-update-2mdbh"] Dec 09 11:55:51 crc kubenswrapper[4849]: I1209 11:55:51.044344 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ebfc-account-create-update-2mdbh"] Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.032502 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mf4zj"] Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.043175 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-60d1-account-create-update-qlzbg"] Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.053722 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-s5889"] Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.065475 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mf4zj"] Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.073799 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-60d1-account-create-update-qlzbg"] Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.081815 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-s5889"] Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.548580 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9bdc85-b967-4475-a94b-4f4fa6e74c5b" path="/var/lib/kubelet/pods/5e9bdc85-b967-4475-a94b-4f4fa6e74c5b/volumes" Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.549266 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0feb1b2-1589-42f6-824d-431ae417ce09" path="/var/lib/kubelet/pods/e0feb1b2-1589-42f6-824d-431ae417ce09/volumes" Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.549858 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e95ad6f6-4e91-4d3d-b456-36e5a1cc5969" path="/var/lib/kubelet/pods/e95ad6f6-4e91-4d3d-b456-36e5a1cc5969/volumes" Dec 09 11:55:52 crc kubenswrapper[4849]: I1209 11:55:52.550596 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b3e656-264c-42c0-afd9-26b87f3b208e" path="/var/lib/kubelet/pods/f6b3e656-264c-42c0-afd9-26b87f3b208e/volumes" Dec 09 11:55:53 crc kubenswrapper[4849]: I1209 11:55:53.254571 4849 generic.go:334] "Generic (PLEG): container finished" podID="04376a83-eea2-4010-8403-0852cbf6b7de" containerID="c9e7f1a26f2aaed56dae58699dad2e2685a92ef5cda7994786ec289ca8b79dcb" exitCode=0 Dec 09 11:55:53 crc kubenswrapper[4849]: I1209 11:55:53.254650 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" event={"ID":"04376a83-eea2-4010-8403-0852cbf6b7de","Type":"ContainerDied","Data":"c9e7f1a26f2aaed56dae58699dad2e2685a92ef5cda7994786ec289ca8b79dcb"} Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.719627 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.809505 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-inventory\") pod \"04376a83-eea2-4010-8403-0852cbf6b7de\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.809584 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-bootstrap-combined-ca-bundle\") pod \"04376a83-eea2-4010-8403-0852cbf6b7de\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.809638 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-ssh-key\") pod \"04376a83-eea2-4010-8403-0852cbf6b7de\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.809665 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5np5\" (UniqueName: \"kubernetes.io/projected/04376a83-eea2-4010-8403-0852cbf6b7de-kube-api-access-j5np5\") pod \"04376a83-eea2-4010-8403-0852cbf6b7de\" (UID: \"04376a83-eea2-4010-8403-0852cbf6b7de\") " Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.816618 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "04376a83-eea2-4010-8403-0852cbf6b7de" (UID: "04376a83-eea2-4010-8403-0852cbf6b7de"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.817986 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04376a83-eea2-4010-8403-0852cbf6b7de-kube-api-access-j5np5" (OuterVolumeSpecName: "kube-api-access-j5np5") pod "04376a83-eea2-4010-8403-0852cbf6b7de" (UID: "04376a83-eea2-4010-8403-0852cbf6b7de"). InnerVolumeSpecName "kube-api-access-j5np5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.839723 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-inventory" (OuterVolumeSpecName: "inventory") pod "04376a83-eea2-4010-8403-0852cbf6b7de" (UID: "04376a83-eea2-4010-8403-0852cbf6b7de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.840560 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "04376a83-eea2-4010-8403-0852cbf6b7de" (UID: "04376a83-eea2-4010-8403-0852cbf6b7de"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.911333 4849 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.911364 4849 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.911376 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04376a83-eea2-4010-8403-0852cbf6b7de-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:54.911386 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5np5\" (UniqueName: \"kubernetes.io/projected/04376a83-eea2-4010-8403-0852cbf6b7de-kube-api-access-j5np5\") on node \"crc\" DevicePath \"\"" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.274537 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" event={"ID":"04376a83-eea2-4010-8403-0852cbf6b7de","Type":"ContainerDied","Data":"97552ce81edbf784654ef2f4a6544ec47d97e626e818313dd1a4e455967fb789"} Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.274577 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97552ce81edbf784654ef2f4a6544ec47d97e626e818313dd1a4e455967fb789" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.274870 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.462154 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5"] Dec 09 11:55:55 crc kubenswrapper[4849]: E1209 11:55:55.462665 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerName="extract-content" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.462701 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerName="extract-content" Dec 09 11:55:55 crc kubenswrapper[4849]: E1209 11:55:55.462724 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04376a83-eea2-4010-8403-0852cbf6b7de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.462734 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="04376a83-eea2-4010-8403-0852cbf6b7de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 11:55:55 crc kubenswrapper[4849]: E1209 11:55:55.462753 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerName="extract-utilities" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.462762 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerName="extract-utilities" Dec 09 11:55:55 crc kubenswrapper[4849]: E1209 11:55:55.462778 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerName="registry-server" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.462786 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerName="registry-server" Dec 09 11:55:55 crc kubenswrapper[4849]: E1209 11:55:55.462800 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerName="extract-content" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.462808 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerName="extract-content" Dec 09 11:55:55 crc kubenswrapper[4849]: E1209 11:55:55.462845 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerName="extract-utilities" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.462853 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerName="extract-utilities" Dec 09 11:55:55 crc kubenswrapper[4849]: E1209 11:55:55.462869 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerName="registry-server" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.462877 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5611b9-2e83-438a-be25-9aba7cfe991b" containerName="registry-server" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.463112 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="04376a83-eea2-4010-8403-0852cbf6b7de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.463134 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5611b9-2e83-438a-be25-9aba7cfe991b" 
containerName="registry-server" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.463154 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2b9213-0f74-4942-a4fe-ac2eb405efe2" containerName="registry-server" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.463885 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.470379 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.470505 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j9nv" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.487709 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.491646 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.507541 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5"] Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.551272 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.551371 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.551427 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8h2h\" (UniqueName: \"kubernetes.io/projected/fa008105-59e6-48d8-9b1c-c8d65ad51d31-kube-api-access-j8h2h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.652536 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.652625 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.652654 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8h2h\" (UniqueName: \"kubernetes.io/projected/fa008105-59e6-48d8-9b1c-c8d65ad51d31-kube-api-access-j8h2h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.657795 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.657844 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.701330 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8h2h\" (UniqueName: \"kubernetes.io/projected/fa008105-59e6-48d8-9b1c-c8d65ad51d31-kube-api-access-j8h2h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:55 crc kubenswrapper[4849]: I1209 11:55:55.781614 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:55:56 crc kubenswrapper[4849]: I1209 11:55:56.364447 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5"] Dec 09 11:55:56 crc kubenswrapper[4849]: I1209 11:55:56.376236 4849 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 11:55:57 crc kubenswrapper[4849]: I1209 11:55:57.294592 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" event={"ID":"fa008105-59e6-48d8-9b1c-c8d65ad51d31","Type":"ContainerStarted","Data":"84aec6e67425e40bdd516871418d8d2716094d5900ecdcb77c26ba5dde29cabf"} Dec 09 11:55:57 crc kubenswrapper[4849]: I1209 11:55:57.294931 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" event={"ID":"fa008105-59e6-48d8-9b1c-c8d65ad51d31","Type":"ContainerStarted","Data":"8a1f2e6491119c08445c2ea1cab0e6f3ffed9630729ec6d4c62e84a80a2b05b3"} Dec 09 11:55:57 crc kubenswrapper[4849]: I1209 11:55:57.333740 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" podStartSLOduration=1.832901251 podStartE2EDuration="2.333715229s" podCreationTimestamp="2025-12-09 11:55:55 +0000 UTC" firstStartedPulling="2025-12-09 11:55:56.376050283 +0000 UTC m=+1738.915934599" lastFinishedPulling="2025-12-09 11:55:56.876864261 +0000 UTC m=+1739.416748577" observedRunningTime="2025-12-09 11:55:57.320727498 +0000 UTC m=+1739.860611834" watchObservedRunningTime="2025-12-09 11:55:57.333715229 +0000 UTC m=+1739.873599555" Dec 09 11:55:58 crc kubenswrapper[4849]: I1209 11:55:58.550905 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:55:58 crc kubenswrapper[4849]: E1209 11:55:58.553389 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:56:05 crc kubenswrapper[4849]: I1209 11:56:05.309821 4849 scope.go:117] "RemoveContainer" containerID="76687cbacdb8f3ed258e74330e46e339a82bb41ac82ec7757c7f79e280017e0e" Dec 09 11:56:05 crc kubenswrapper[4849]: I1209 11:56:05.347360 4849 scope.go:117] "RemoveContainer" containerID="6a07e4fb7c4ec449518866150dae45465fee5533aadb050bb78300745c0deee5" Dec 09 11:56:05 crc kubenswrapper[4849]: I1209 11:56:05.379005 4849 scope.go:117] "RemoveContainer" containerID="46a742bd8a87e91acfafba87ef4385d9741a761b22a02991b9a4dcdb3aa94a00" Dec 09 11:56:05 crc kubenswrapper[4849]: I1209 11:56:05.423641 4849 scope.go:117] "RemoveContainer" containerID="220d95d1ccdb1cfd901238d0dcbd6f0f5b6087914e8082f0735185222ed2da20" Dec 09 11:56:05 crc kubenswrapper[4849]: I1209 11:56:05.469727 4849 scope.go:117] "RemoveContainer" containerID="7a1d9ad0742926ccb19eec63085d4b5a134b57b8b4c36b8d45d79151eafef657" Dec 09 11:56:05 crc kubenswrapper[4849]: I1209 11:56:05.522774 4849 scope.go:117] "RemoveContainer" containerID="e60a9d5f87697469a492827b7403550f3b885f632095e3989e501ee48e90fa97" 
Dec 09 11:56:13 crc kubenswrapper[4849]: I1209 11:56:13.040221 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-w67cr"] Dec 09 11:56:13 crc kubenswrapper[4849]: I1209 11:56:13.049912 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-w67cr"] Dec 09 11:56:13 crc kubenswrapper[4849]: I1209 11:56:13.536550 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:56:13 crc kubenswrapper[4849]: E1209 11:56:13.536818 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:56:14 crc kubenswrapper[4849]: I1209 11:56:14.549087 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c21ca38-f4ba-44cb-99db-914844f473d0" path="/var/lib/kubelet/pods/6c21ca38-f4ba-44cb-99db-914844f473d0/volumes" Dec 09 11:56:20 crc kubenswrapper[4849]: I1209 11:56:20.026316 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e146-account-create-update-nmmnl"] Dec 09 11:56:20 crc kubenswrapper[4849]: I1209 11:56:20.034150 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e146-account-create-update-nmmnl"] Dec 09 11:56:20 crc kubenswrapper[4849]: I1209 11:56:20.547476 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91a8ccf8-c54d-4a8c-a679-281e06d136da" path="/var/lib/kubelet/pods/91a8ccf8-c54d-4a8c-a679-281e06d136da/volumes" Dec 09 11:56:21 crc kubenswrapper[4849]: I1209 11:56:21.058558 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xqqlw"] Dec 09 11:56:21 crc kubenswrapper[4849]: I1209 11:56:21.081523 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4bvc5"] Dec 09 11:56:21 crc kubenswrapper[4849]: I1209 11:56:21.089665 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4bvc5"] Dec 09 11:56:21 crc kubenswrapper[4849]: I1209 11:56:21.099569 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4994-account-create-update-cghqb"] Dec 09 11:56:21 crc kubenswrapper[4849]: I1209 11:56:21.107569 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xqqlw"] Dec 09 11:56:21 crc kubenswrapper[4849]: I1209 11:56:21.118245 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4994-account-create-update-cghqb"] Dec 09 11:56:22 crc kubenswrapper[4849]: I1209 11:56:22.548519 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10ecb332-bacf-4550-93d1-2e3cb5f9e3f8" path="/var/lib/kubelet/pods/10ecb332-bacf-4550-93d1-2e3cb5f9e3f8/volumes" Dec 09 11:56:22 crc kubenswrapper[4849]: I1209 11:56:22.549355 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373b7741-fc4b-4182-8e48-1120d1ba867b" path="/var/lib/kubelet/pods/373b7741-fc4b-4182-8e48-1120d1ba867b/volumes" Dec 09 11:56:22 crc kubenswrapper[4849]: I1209 11:56:22.549934 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588f68a7-71b1-409a-9abc-ff1e7d6683f9" 
path="/var/lib/kubelet/pods/588f68a7-71b1-409a-9abc-ff1e7d6683f9/volumes" Dec 09 11:56:25 crc kubenswrapper[4849]: I1209 11:56:25.047051 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-40e7-account-create-update-6hzsv"] Dec 09 11:56:25 crc kubenswrapper[4849]: I1209 11:56:25.053585 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-s8dwb"] Dec 09 11:56:25 crc kubenswrapper[4849]: I1209 11:56:25.068498 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-40e7-account-create-update-6hzsv"] Dec 09 11:56:25 crc kubenswrapper[4849]: I1209 11:56:25.081116 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-s8dwb"] Dec 09 11:56:25 crc kubenswrapper[4849]: I1209 11:56:25.537002 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:56:25 crc kubenswrapper[4849]: E1209 11:56:25.537266 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:56:26 crc kubenswrapper[4849]: I1209 11:56:26.551358 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e652b8-2c10-4b35-8986-9f3178ff0556" path="/var/lib/kubelet/pods/37e652b8-2c10-4b35-8986-9f3178ff0556/volumes" Dec 09 11:56:26 crc kubenswrapper[4849]: I1209 11:56:26.552521 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83943fe-425d-41b5-80c0-2ab81180e474" path="/var/lib/kubelet/pods/c83943fe-425d-41b5-80c0-2ab81180e474/volumes" Dec 09 11:56:30 crc kubenswrapper[4849]: I1209 11:56:30.030885 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gjkvp"] Dec 09 11:56:30 crc kubenswrapper[4849]: I1209 11:56:30.039943 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gjkvp"] Dec 09 11:56:30 crc kubenswrapper[4849]: I1209 11:56:30.553211 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0134bbaa-98fd-401f-96b5-addf0aa2ce7d" path="/var/lib/kubelet/pods/0134bbaa-98fd-401f-96b5-addf0aa2ce7d/volumes" Dec 09 11:56:39 crc kubenswrapper[4849]: I1209 11:56:39.536858 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:56:39 crc kubenswrapper[4849]: E1209 11:56:39.537653 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:56:48 crc kubenswrapper[4849]: I1209 11:56:48.803178 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z8zmt"] Dec 09 11:56:48 crc kubenswrapper[4849]: I1209 11:56:48.805699 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:48 crc kubenswrapper[4849]: I1209 11:56:48.819207 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8zmt"] Dec 09 11:56:48 crc kubenswrapper[4849]: I1209 11:56:48.916071 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz2dh\" (UniqueName: \"kubernetes.io/projected/da8791c1-4dbc-4b28-a90e-18845bb52480-kube-api-access-mz2dh\") pod \"redhat-marketplace-z8zmt\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:48 crc kubenswrapper[4849]: I1209 11:56:48.916144 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-utilities\") pod \"redhat-marketplace-z8zmt\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:48 crc kubenswrapper[4849]: I1209 11:56:48.916231 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-catalog-content\") pod \"redhat-marketplace-z8zmt\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:49 crc kubenswrapper[4849]: I1209 11:56:49.017695 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-utilities\") pod \"redhat-marketplace-z8zmt\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:49 crc kubenswrapper[4849]: I1209 11:56:49.018005 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-catalog-content\") pod \"redhat-marketplace-z8zmt\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:49 crc kubenswrapper[4849]: I1209 11:56:49.018128 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz2dh\" (UniqueName: \"kubernetes.io/projected/da8791c1-4dbc-4b28-a90e-18845bb52480-kube-api-access-mz2dh\") pod \"redhat-marketplace-z8zmt\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:49 crc kubenswrapper[4849]: I1209 11:56:49.018238 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-utilities\") pod \"redhat-marketplace-z8zmt\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:49 crc kubenswrapper[4849]: I1209 11:56:49.018527 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-catalog-content\") pod \"redhat-marketplace-z8zmt\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:49 crc kubenswrapper[4849]: I1209 11:56:49.049267 4849 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mz2dh\" (UniqueName: \"kubernetes.io/projected/da8791c1-4dbc-4b28-a90e-18845bb52480-kube-api-access-mz2dh\") pod \"redhat-marketplace-z8zmt\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:49 crc kubenswrapper[4849]: I1209 11:56:49.129107 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:49 crc kubenswrapper[4849]: I1209 11:56:49.664734 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8zmt"] Dec 09 11:56:49 crc kubenswrapper[4849]: I1209 11:56:49.759499 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8zmt" event={"ID":"da8791c1-4dbc-4b28-a90e-18845bb52480","Type":"ContainerStarted","Data":"dbebb1bf95f29c4909ffcbb15262f230f8b4195862dd6e5efabd750a17ed5287"} Dec 09 11:56:50 crc kubenswrapper[4849]: I1209 11:56:50.538279 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:56:50 crc kubenswrapper[4849]: E1209 11:56:50.538867 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:56:50 crc kubenswrapper[4849]: I1209 11:56:50.774279 4849 generic.go:334] "Generic (PLEG): container finished" podID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerID="2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8" exitCode=0 Dec 09 11:56:50 crc kubenswrapper[4849]: I1209 11:56:50.774440 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8zmt" event={"ID":"da8791c1-4dbc-4b28-a90e-18845bb52480","Type":"ContainerDied","Data":"2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8"} Dec 09 11:56:52 crc kubenswrapper[4849]: I1209 11:56:52.792523 4849 generic.go:334] "Generic (PLEG): container finished" podID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerID="5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211" exitCode=0 Dec 09 11:56:52 crc kubenswrapper[4849]: I1209 11:56:52.792645 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8zmt" event={"ID":"da8791c1-4dbc-4b28-a90e-18845bb52480","Type":"ContainerDied","Data":"5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211"} Dec 09 11:56:53 crc kubenswrapper[4849]: I1209 11:56:53.804858 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8zmt" event={"ID":"da8791c1-4dbc-4b28-a90e-18845bb52480","Type":"ContainerStarted","Data":"1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2"} Dec 09 11:56:58 crc kubenswrapper[4849]: I1209 11:56:58.043866 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z8zmt" podStartSLOduration=7.612753072 podStartE2EDuration="10.043842142s" podCreationTimestamp="2025-12-09 11:56:48 +0000 UTC" firstStartedPulling="2025-12-09 11:56:50.776449283 +0000 UTC m=+1793.316333599" lastFinishedPulling="2025-12-09 
11:56:53.207538353 +0000 UTC m=+1795.747422669" observedRunningTime="2025-12-09 11:56:53.823949295 +0000 UTC m=+1796.363833631" watchObservedRunningTime="2025-12-09 11:56:58.043842142 +0000 UTC m=+1800.583726458" Dec 09 11:56:58 crc kubenswrapper[4849]: I1209 11:56:58.051903 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5gjhd"] Dec 09 11:56:58 crc kubenswrapper[4849]: I1209 11:56:58.062345 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5gjhd"] Dec 09 11:56:58 crc kubenswrapper[4849]: I1209 11:56:58.550902 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9d5eb2-c2a5-4493-ab04-e8483f1efafe" path="/var/lib/kubelet/pods/2c9d5eb2-c2a5-4493-ab04-e8483f1efafe/volumes" Dec 09 11:56:59 crc kubenswrapper[4849]: I1209 11:56:59.130146 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:59 crc kubenswrapper[4849]: I1209 11:56:59.131497 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:59 crc kubenswrapper[4849]: I1209 11:56:59.188676 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:59 crc kubenswrapper[4849]: I1209 11:56:59.908931 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:56:59 crc kubenswrapper[4849]: I1209 11:56:59.965360 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8zmt"] Dec 09 11:57:01 crc kubenswrapper[4849]: I1209 11:57:01.031088 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4wpm9"] Dec 09 11:57:01 crc kubenswrapper[4849]: I1209 11:57:01.037809 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4wpm9"] Dec 09 11:57:01 crc kubenswrapper[4849]: I1209 11:57:01.879475 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z8zmt" podUID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerName="registry-server" containerID="cri-o://1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2" gracePeriod=2 Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.298135 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.367955 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-utilities\") pod \"da8791c1-4dbc-4b28-a90e-18845bb52480\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.368024 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-catalog-content\") pod \"da8791c1-4dbc-4b28-a90e-18845bb52480\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.368053 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz2dh\" (UniqueName: \"kubernetes.io/projected/da8791c1-4dbc-4b28-a90e-18845bb52480-kube-api-access-mz2dh\") pod \"da8791c1-4dbc-4b28-a90e-18845bb52480\" (UID: \"da8791c1-4dbc-4b28-a90e-18845bb52480\") " Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.369377 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-utilities" (OuterVolumeSpecName: "utilities") pod "da8791c1-4dbc-4b28-a90e-18845bb52480" (UID: "da8791c1-4dbc-4b28-a90e-18845bb52480"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.374978 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8791c1-4dbc-4b28-a90e-18845bb52480-kube-api-access-mz2dh" (OuterVolumeSpecName: "kube-api-access-mz2dh") pod "da8791c1-4dbc-4b28-a90e-18845bb52480" (UID: "da8791c1-4dbc-4b28-a90e-18845bb52480"). InnerVolumeSpecName "kube-api-access-mz2dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.394566 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da8791c1-4dbc-4b28-a90e-18845bb52480" (UID: "da8791c1-4dbc-4b28-a90e-18845bb52480"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.470502 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.470549 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8791c1-4dbc-4b28-a90e-18845bb52480-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.470564 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz2dh\" (UniqueName: \"kubernetes.io/projected/da8791c1-4dbc-4b28-a90e-18845bb52480-kube-api-access-mz2dh\") on node \"crc\" DevicePath \"\"" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.546940 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9d48847-f667-4f50-b9a1-d68bdf0a63a3" path="/var/lib/kubelet/pods/c9d48847-f667-4f50-b9a1-d68bdf0a63a3/volumes" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.894090 4849 generic.go:334] "Generic (PLEG): container finished" podID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerID="1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2" exitCode=0 Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.894177 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8zmt" event={"ID":"da8791c1-4dbc-4b28-a90e-18845bb52480","Type":"ContainerDied","Data":"1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2"} Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.894209 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8zmt" event={"ID":"da8791c1-4dbc-4b28-a90e-18845bb52480","Type":"ContainerDied","Data":"dbebb1bf95f29c4909ffcbb15262f230f8b4195862dd6e5efabd750a17ed5287"} Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.894230 4849 scope.go:117] "RemoveContainer" containerID="1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.894254 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8zmt" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.939471 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8zmt"] Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.944460 4849 scope.go:117] "RemoveContainer" containerID="5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211" Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.954089 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8zmt"] Dec 09 11:57:02 crc kubenswrapper[4849]: I1209 11:57:02.974792 4849 scope.go:117] "RemoveContainer" containerID="2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8" Dec 09 11:57:03 crc kubenswrapper[4849]: I1209 11:57:03.023294 4849 scope.go:117] "RemoveContainer" containerID="1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2" Dec 09 11:57:03 crc kubenswrapper[4849]: E1209 11:57:03.024157 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2\": container with ID starting with 1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2 not found: ID does not exist" containerID="1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2" Dec 09 11:57:03 crc kubenswrapper[4849]: I1209 11:57:03.024192 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2"} err="failed to get container status \"1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2\": rpc error: code = NotFound desc = could not find container \"1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2\": container with ID starting with 1950ab23d6576e69bb677cb09e689123b00947b963af0243cd4f2e23f4e152f2 not found: ID does not exist" Dec 09 11:57:03 crc kubenswrapper[4849]: I1209 11:57:03.024220 4849 scope.go:117] "RemoveContainer" containerID="5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211" Dec 09 11:57:03 crc kubenswrapper[4849]: E1209 11:57:03.024725 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211\": container with ID starting with 5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211 not found: ID does not exist" containerID="5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211" Dec 09 11:57:03 crc kubenswrapper[4849]: I1209 11:57:03.024748 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211"} err="failed to get container status \"5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211\": rpc error: code = NotFound desc = could not find container \"5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211\": container with ID starting with 5583bc0d7d894421265360627390d7ab1bfe68c0d4d1dfd30fc9448eee53e211 not found: ID does not exist" Dec 09 11:57:03 crc kubenswrapper[4849]: I1209 11:57:03.024770 4849 scope.go:117] "RemoveContainer" containerID="2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8" Dec 09 11:57:03 crc kubenswrapper[4849]: E1209 11:57:03.025127 4849 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8\": container with ID starting with 2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8 not found: ID does not exist" containerID="2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8" Dec 09 11:57:03 crc kubenswrapper[4849]: I1209 11:57:03.025215 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8"} err="failed to get container status \"2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8\": rpc error: code = NotFound desc = could not find container \"2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8\": container with ID starting with 2f107ce6fece71b9cf298224edea5cde7b5dc690eba8f4c000361e5fb9e18bd8 not found: ID does not exist" Dec 09 11:57:04 crc kubenswrapper[4849]: I1209 11:57:04.536807 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:57:04 crc kubenswrapper[4849]: E1209 11:57:04.537632 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:57:04 crc kubenswrapper[4849]: I1209 11:57:04.547337 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8791c1-4dbc-4b28-a90e-18845bb52480" path="/var/lib/kubelet/pods/da8791c1-4dbc-4b28-a90e-18845bb52480/volumes" Dec 09 11:57:05 crc kubenswrapper[4849]: I1209 11:57:05.707015 4849 scope.go:117] "RemoveContainer" containerID="b528dc5bbc354088f29d6e946d04d92fedb563de87f39d9f2760c6a71675caa5" Dec 09 11:57:05 crc kubenswrapper[4849]: I1209 11:57:05.746644 4849 scope.go:117] "RemoveContainer" containerID="cbb022a85de05d6168155a8fe29307ac0df6f9a396791bb03a2c6d83391e0692" Dec 09 11:57:05 crc kubenswrapper[4849]: I1209 11:57:05.778444 4849 scope.go:117] "RemoveContainer" containerID="ac4f94b6c6e2a145a5339d30f59a9e8bfba7c929483f0d7c5d693d4533522a68" Dec 09 11:57:05 crc kubenswrapper[4849]: I1209 11:57:05.854010 4849 scope.go:117] "RemoveContainer" containerID="c6cfc62c1c5be286a9994626a9907c5e1427610a6f1d6522cf3a8d44fd3d4099" Dec 09 11:57:05 crc kubenswrapper[4849]: I1209 11:57:05.878732 4849 scope.go:117] "RemoveContainer" containerID="576d6927c42e97461923e686de8ef9568980b84c7935cb3adb7eb3ddfbe47f9a" Dec 09 11:57:05 crc kubenswrapper[4849]: I1209 11:57:05.953822 4849 scope.go:117] "RemoveContainer" containerID="9bf0a228e6bde28b69c2717ceab428b83fcef3c698ba9521eb0a56195a936b4a" Dec 09 11:57:05 crc kubenswrapper[4849]: I1209 11:57:05.976018 4849 scope.go:117] "RemoveContainer" containerID="12bb21ef6cc32c55a57dba03d9e27d2fd0d1fe37df84ebb2981b73389864171f" Dec 09 11:57:05 crc kubenswrapper[4849]: I1209 11:57:05.997361 4849 scope.go:117] "RemoveContainer" containerID="ace995a118bf9847ffecf65f8c3e8166ce8cd5c14447f7081e70e9e3353d3289" Dec 09 11:57:06 crc kubenswrapper[4849]: I1209 11:57:06.022314 4849 scope.go:117] "RemoveContainer" containerID="a596edae188ebe7c5fb3747e2e471aee9227fe7538ab2fe110b22e04d0fd65f6" Dec 09 11:57:06 crc kubenswrapper[4849]: I1209 11:57:06.053340 4849 
scope.go:117] "RemoveContainer" containerID="46ab55db8f827157cb0cb13cf84f10490fb52f2f479f61d5cf8644805f8d1896" Dec 09 11:57:07 crc kubenswrapper[4849]: I1209 11:57:07.032657 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7mnkd"] Dec 09 11:57:07 crc kubenswrapper[4849]: I1209 11:57:07.042653 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7mnkd"] Dec 09 11:57:08 crc kubenswrapper[4849]: I1209 11:57:08.553105 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944" path="/var/lib/kubelet/pods/4c22d7fa-14f8-4afb-9d1e-4fc0ac4d6944/volumes" Dec 09 11:57:12 crc kubenswrapper[4849]: I1209 11:57:12.047858 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tbj8g"] Dec 09 11:57:12 crc kubenswrapper[4849]: I1209 11:57:12.059894 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tbj8g"] Dec 09 11:57:12 crc kubenswrapper[4849]: I1209 11:57:12.551862 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc5d74c-7648-4a3a-a858-dc699a6e0389" path="/var/lib/kubelet/pods/0bc5d74c-7648-4a3a-a858-dc699a6e0389/volumes" Dec 09 11:57:15 crc kubenswrapper[4849]: I1209 11:57:15.035693 4849 generic.go:334] "Generic (PLEG): container finished" podID="fa008105-59e6-48d8-9b1c-c8d65ad51d31" containerID="84aec6e67425e40bdd516871418d8d2716094d5900ecdcb77c26ba5dde29cabf" exitCode=0 Dec 09 11:57:15 crc kubenswrapper[4849]: I1209 11:57:15.035777 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" event={"ID":"fa008105-59e6-48d8-9b1c-c8d65ad51d31","Type":"ContainerDied","Data":"84aec6e67425e40bdd516871418d8d2716094d5900ecdcb77c26ba5dde29cabf"} Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.463697 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.549245 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8h2h\" (UniqueName: \"kubernetes.io/projected/fa008105-59e6-48d8-9b1c-c8d65ad51d31-kube-api-access-j8h2h\") pod \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.549565 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-ssh-key\") pod \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.549644 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-inventory\") pod \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\" (UID: \"fa008105-59e6-48d8-9b1c-c8d65ad51d31\") " Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.556593 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa008105-59e6-48d8-9b1c-c8d65ad51d31-kube-api-access-j8h2h" (OuterVolumeSpecName: "kube-api-access-j8h2h") pod "fa008105-59e6-48d8-9b1c-c8d65ad51d31" (UID: "fa008105-59e6-48d8-9b1c-c8d65ad51d31"). InnerVolumeSpecName "kube-api-access-j8h2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.573796 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fa008105-59e6-48d8-9b1c-c8d65ad51d31" (UID: "fa008105-59e6-48d8-9b1c-c8d65ad51d31"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.581691 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-inventory" (OuterVolumeSpecName: "inventory") pod "fa008105-59e6-48d8-9b1c-c8d65ad51d31" (UID: "fa008105-59e6-48d8-9b1c-c8d65ad51d31"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.652188 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.652337 4849 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa008105-59e6-48d8-9b1c-c8d65ad51d31-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 11:57:16 crc kubenswrapper[4849]: I1209 11:57:16.652431 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8h2h\" (UniqueName: \"kubernetes.io/projected/fa008105-59e6-48d8-9b1c-c8d65ad51d31-kube-api-access-j8h2h\") on node \"crc\" DevicePath \"\"" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.053517 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" event={"ID":"fa008105-59e6-48d8-9b1c-c8d65ad51d31","Type":"ContainerDied","Data":"8a1f2e6491119c08445c2ea1cab0e6f3ffed9630729ec6d4c62e84a80a2b05b3"} Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.054006 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a1f2e6491119c08445c2ea1cab0e6f3ffed9630729ec6d4c62e84a80a2b05b3" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.053559 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.156528 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz"] Dec 09 11:57:17 crc kubenswrapper[4849]: E1209 11:57:17.157192 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerName="registry-server" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.157298 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerName="registry-server" Dec 09 11:57:17 crc kubenswrapper[4849]: E1209 11:57:17.157403 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerName="extract-utilities" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.157684 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerName="extract-utilities" Dec 09 11:57:17 crc kubenswrapper[4849]: E1209 11:57:17.157787 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerName="extract-content" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.157849 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerName="extract-content" Dec 09 11:57:17 crc kubenswrapper[4849]: E1209 11:57:17.157929 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa008105-59e6-48d8-9b1c-c8d65ad51d31" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.158015 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa008105-59e6-48d8-9b1c-c8d65ad51d31" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.158303 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa008105-59e6-48d8-9b1c-c8d65ad51d31" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.158395 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8791c1-4dbc-4b28-a90e-18845bb52480" containerName="registry-server" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.159293 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.162191 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.162273 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.162470 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.162870 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j9nv" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.181034 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz"] Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.265161 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.265248 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmndr\" (UniqueName: \"kubernetes.io/projected/fa05da2f-5d37-4c32-a2c5-e30019999c60-kube-api-access-xmndr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.265506 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.367644 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmndr\" (UniqueName: \"kubernetes.io/projected/fa05da2f-5d37-4c32-a2c5-e30019999c60-kube-api-access-xmndr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.368104 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.368274 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.379504 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.379509 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.385162 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmndr\" (UniqueName: \"kubernetes.io/projected/fa05da2f-5d37-4c32-a2c5-e30019999c60-kube-api-access-xmndr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.481757 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:17 crc kubenswrapper[4849]: I1209 11:57:17.537749 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:57:17 crc kubenswrapper[4849]: E1209 11:57:17.537975 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 11:57:18 crc kubenswrapper[4849]: I1209 11:57:18.016712 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz"] Dec 09 11:57:18 crc kubenswrapper[4849]: I1209 11:57:18.066914 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" event={"ID":"fa05da2f-5d37-4c32-a2c5-e30019999c60","Type":"ContainerStarted","Data":"45cc71b735f907dde65927cf5caa325afa4778cf89b191e68ac0a3801974c360"} Dec 09 11:57:19 crc kubenswrapper[4849]: I1209 11:57:19.076960 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" event={"ID":"fa05da2f-5d37-4c32-a2c5-e30019999c60","Type":"ContainerStarted","Data":"f33783cfc9ebd042f7cdca39a3b204e78231b1d4522d4382c163a015ea8894b4"} Dec 09 11:57:19 crc kubenswrapper[4849]: I1209 11:57:19.095730 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" podStartSLOduration=1.671552793 podStartE2EDuration="2.095674198s" 
podCreationTimestamp="2025-12-09 11:57:17 +0000 UTC" firstStartedPulling="2025-12-09 11:57:18.016988648 +0000 UTC m=+1820.556872964" lastFinishedPulling="2025-12-09 11:57:18.441110053 +0000 UTC m=+1820.980994369" observedRunningTime="2025-12-09 11:57:19.08887368 +0000 UTC m=+1821.628757996" watchObservedRunningTime="2025-12-09 11:57:19.095674198 +0000 UTC m=+1821.635558524" Dec 09 11:57:25 crc kubenswrapper[4849]: I1209 11:57:25.125239 4849 generic.go:334] "Generic (PLEG): container finished" podID="fa05da2f-5d37-4c32-a2c5-e30019999c60" containerID="f33783cfc9ebd042f7cdca39a3b204e78231b1d4522d4382c163a015ea8894b4" exitCode=0 Dec 09 11:57:25 crc kubenswrapper[4849]: I1209 11:57:25.125830 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" event={"ID":"fa05da2f-5d37-4c32-a2c5-e30019999c60","Type":"ContainerDied","Data":"f33783cfc9ebd042f7cdca39a3b204e78231b1d4522d4382c163a015ea8894b4"} Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.589478 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.651531 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-inventory\") pod \"fa05da2f-5d37-4c32-a2c5-e30019999c60\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.651661 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-ssh-key\") pod \"fa05da2f-5d37-4c32-a2c5-e30019999c60\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.651804 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmndr\" (UniqueName: \"kubernetes.io/projected/fa05da2f-5d37-4c32-a2c5-e30019999c60-kube-api-access-xmndr\") pod \"fa05da2f-5d37-4c32-a2c5-e30019999c60\" (UID: \"fa05da2f-5d37-4c32-a2c5-e30019999c60\") " Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.659119 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa05da2f-5d37-4c32-a2c5-e30019999c60-kube-api-access-xmndr" (OuterVolumeSpecName: "kube-api-access-xmndr") pod "fa05da2f-5d37-4c32-a2c5-e30019999c60" (UID: "fa05da2f-5d37-4c32-a2c5-e30019999c60"). InnerVolumeSpecName "kube-api-access-xmndr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.682285 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fa05da2f-5d37-4c32-a2c5-e30019999c60" (UID: "fa05da2f-5d37-4c32-a2c5-e30019999c60"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.683480 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-inventory" (OuterVolumeSpecName: "inventory") pod "fa05da2f-5d37-4c32-a2c5-e30019999c60" (UID: "fa05da2f-5d37-4c32-a2c5-e30019999c60"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.753516 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmndr\" (UniqueName: \"kubernetes.io/projected/fa05da2f-5d37-4c32-a2c5-e30019999c60-kube-api-access-xmndr\") on node \"crc\" DevicePath \"\"" Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.753544 4849 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 11:57:26 crc kubenswrapper[4849]: I1209 11:57:26.753553 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa05da2f-5d37-4c32-a2c5-e30019999c60-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.149047 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" event={"ID":"fa05da2f-5d37-4c32-a2c5-e30019999c60","Type":"ContainerDied","Data":"45cc71b735f907dde65927cf5caa325afa4778cf89b191e68ac0a3801974c360"} Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.149097 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45cc71b735f907dde65927cf5caa325afa4778cf89b191e68ac0a3801974c360" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.149163 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.250102 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl"] Dec 09 11:57:27 crc kubenswrapper[4849]: E1209 11:57:27.250514 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa05da2f-5d37-4c32-a2c5-e30019999c60" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.250527 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa05da2f-5d37-4c32-a2c5-e30019999c60" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.250716 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa05da2f-5d37-4c32-a2c5-e30019999c60" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.251305 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.258887 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.259161 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.259354 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j9nv" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.261071 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.267728 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6shhl\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.267782 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnhn\" (UniqueName: \"kubernetes.io/projected/bf13f211-fc25-44b6-bdee-e6b92c4102c4-kube-api-access-dvnhn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6shhl\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.267853 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6shhl\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.277808 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl"] Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.370444 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6shhl\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.370502 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvnhn\" (UniqueName: \"kubernetes.io/projected/bf13f211-fc25-44b6-bdee-e6b92c4102c4-kube-api-access-dvnhn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6shhl\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.370537 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6shhl\" (UID: 
\"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.376739 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6shhl\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.379870 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6shhl\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.392210 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvnhn\" (UniqueName: \"kubernetes.io/projected/bf13f211-fc25-44b6-bdee-e6b92c4102c4-kube-api-access-dvnhn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6shhl\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:27 crc kubenswrapper[4849]: I1209 11:57:27.576492 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:57:28 crc kubenswrapper[4849]: I1209 11:57:28.096099 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl"] Dec 09 11:57:28 crc kubenswrapper[4849]: I1209 11:57:28.164534 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" event={"ID":"bf13f211-fc25-44b6-bdee-e6b92c4102c4","Type":"ContainerStarted","Data":"de43dc2fc6ae05e473e4507ab5feb1bac3395d68d9bd750720b6e867a3c4b98d"} Dec 09 11:57:29 crc kubenswrapper[4849]: I1209 11:57:29.176050 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" event={"ID":"bf13f211-fc25-44b6-bdee-e6b92c4102c4","Type":"ContainerStarted","Data":"78a6fb2928d0ab1000d6268d7c221b2c7697f0efc8e6a192564c87b8fbd0113b"} Dec 09 11:57:29 crc kubenswrapper[4849]: I1209 11:57:29.198056 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" podStartSLOduration=1.7867699 podStartE2EDuration="2.198033467s" podCreationTimestamp="2025-12-09 11:57:27 +0000 UTC" firstStartedPulling="2025-12-09 11:57:28.097258231 +0000 UTC m=+1830.637142547" lastFinishedPulling="2025-12-09 11:57:28.508521798 +0000 UTC m=+1831.048406114" observedRunningTime="2025-12-09 11:57:29.19452665 +0000 UTC m=+1831.734410976" watchObservedRunningTime="2025-12-09 11:57:29.198033467 +0000 UTC m=+1831.737917783" Dec 09 11:57:31 crc kubenswrapper[4849]: I1209 11:57:31.030351 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nmgsr"] Dec 09 11:57:31 crc kubenswrapper[4849]: I1209 11:57:31.044781 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nmgsr"] Dec 09 11:57:32 crc kubenswrapper[4849]: I1209 11:57:32.536909 4849 scope.go:117] "RemoveContainer" 
containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7" Dec 09 11:57:32 crc kubenswrapper[4849]: I1209 11:57:32.549568 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8301f3-a405-47fc-b1a8-475daf544079" path="/var/lib/kubelet/pods/df8301f3-a405-47fc-b1a8-475daf544079/volumes" Dec 09 11:57:33 crc kubenswrapper[4849]: I1209 11:57:33.209269 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"7c886b97127ff3cfff7eb01a274c621a14da03725270ed7e7327b9be287540ec"} Dec 09 11:58:06 crc kubenswrapper[4849]: I1209 11:58:06.245268 4849 scope.go:117] "RemoveContainer" containerID="95f6d5d6ae0acce5c0a9e51b6358b2217f72e71e3a83695f9b93e3a6826bfcb3" Dec 09 11:58:06 crc kubenswrapper[4849]: I1209 11:58:06.290435 4849 scope.go:117] "RemoveContainer" containerID="2fd9e777f2d8eed9b557fe05e681f4721f7305d52c57341be99a5c250054d1fa" Dec 09 11:58:06 crc kubenswrapper[4849]: I1209 11:58:06.398558 4849 scope.go:117] "RemoveContainer" containerID="2e1be5b125c60b0aba9b126959aadd9e4d47ed2cd5d0da84ff0030d34c9afccc" Dec 09 11:58:11 crc kubenswrapper[4849]: I1209 11:58:11.722517 4849 generic.go:334] "Generic (PLEG): container finished" podID="bf13f211-fc25-44b6-bdee-e6b92c4102c4" containerID="78a6fb2928d0ab1000d6268d7c221b2c7697f0efc8e6a192564c87b8fbd0113b" exitCode=0 Dec 09 11:58:11 crc kubenswrapper[4849]: I1209 11:58:11.722666 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" event={"ID":"bf13f211-fc25-44b6-bdee-e6b92c4102c4","Type":"ContainerDied","Data":"78a6fb2928d0ab1000d6268d7c221b2c7697f0efc8e6a192564c87b8fbd0113b"} Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.168505 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.218601 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-ssh-key\") pod \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.218667 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-inventory\") pod \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.218743 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvnhn\" (UniqueName: \"kubernetes.io/projected/bf13f211-fc25-44b6-bdee-e6b92c4102c4-kube-api-access-dvnhn\") pod \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\" (UID: \"bf13f211-fc25-44b6-bdee-e6b92c4102c4\") " Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.236772 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf13f211-fc25-44b6-bdee-e6b92c4102c4-kube-api-access-dvnhn" (OuterVolumeSpecName: "kube-api-access-dvnhn") pod "bf13f211-fc25-44b6-bdee-e6b92c4102c4" (UID: "bf13f211-fc25-44b6-bdee-e6b92c4102c4"). InnerVolumeSpecName "kube-api-access-dvnhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.246058 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-inventory" (OuterVolumeSpecName: "inventory") pod "bf13f211-fc25-44b6-bdee-e6b92c4102c4" (UID: "bf13f211-fc25-44b6-bdee-e6b92c4102c4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.253701 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bf13f211-fc25-44b6-bdee-e6b92c4102c4" (UID: "bf13f211-fc25-44b6-bdee-e6b92c4102c4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.320745 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvnhn\" (UniqueName: \"kubernetes.io/projected/bf13f211-fc25-44b6-bdee-e6b92c4102c4-kube-api-access-dvnhn\") on node \"crc\" DevicePath \"\"" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.320782 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.320796 4849 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf13f211-fc25-44b6-bdee-e6b92c4102c4-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.742796 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" event={"ID":"bf13f211-fc25-44b6-bdee-e6b92c4102c4","Type":"ContainerDied","Data":"de43dc2fc6ae05e473e4507ab5feb1bac3395d68d9bd750720b6e867a3c4b98d"} Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.743169 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de43dc2fc6ae05e473e4507ab5feb1bac3395d68d9bd750720b6e867a3c4b98d" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.743241 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6shhl" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.837749 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk"] Dec 09 11:58:13 crc kubenswrapper[4849]: E1209 11:58:13.838159 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf13f211-fc25-44b6-bdee-e6b92c4102c4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.838177 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf13f211-fc25-44b6-bdee-e6b92c4102c4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.838351 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf13f211-fc25-44b6-bdee-e6b92c4102c4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.839009 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.842756 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j9nv" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.842981 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.843161 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.843337 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 11:58:13 crc kubenswrapper[4849]: I1209 11:58:13.864151 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk"] Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.031764 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.031855 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5k2w\" (UniqueName: \"kubernetes.io/projected/dc9bff1c-d856-4cab-9c39-19d8106e6a35-kube-api-access-m5k2w\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.032021 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.134049 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.134329 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.134439 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5k2w\" (UniqueName: \"kubernetes.io/projected/dc9bff1c-d856-4cab-9c39-19d8106e6a35-kube-api-access-m5k2w\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk\" 
(UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.138940 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.139907 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.157981 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5k2w\" (UniqueName: \"kubernetes.io/projected/dc9bff1c-d856-4cab-9c39-19d8106e6a35-kube-api-access-m5k2w\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.158500 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.687774 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk"] Dec 09 11:58:14 crc kubenswrapper[4849]: I1209 11:58:14.780428 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" event={"ID":"dc9bff1c-d856-4cab-9c39-19d8106e6a35","Type":"ContainerStarted","Data":"689789f20207433547175c322416ce9c8f641f670f31e9e06dc6046d25fbd8d1"} Dec 09 11:58:15 crc kubenswrapper[4849]: I1209 11:58:15.793714 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" event={"ID":"dc9bff1c-d856-4cab-9c39-19d8106e6a35","Type":"ContainerStarted","Data":"41014c31ef3d7cc2ac6d9a11d6aeae5809a571a0727a7a7d796dd263e05d7343"} Dec 09 11:58:15 crc kubenswrapper[4849]: I1209 11:58:15.817668 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" podStartSLOduration=2.346179613 podStartE2EDuration="2.817646466s" podCreationTimestamp="2025-12-09 11:58:13 +0000 UTC" firstStartedPulling="2025-12-09 11:58:14.694885228 +0000 UTC m=+1877.234769544" lastFinishedPulling="2025-12-09 11:58:15.166352081 +0000 UTC m=+1877.706236397" observedRunningTime="2025-12-09 11:58:15.817441321 +0000 UTC m=+1878.357325637" watchObservedRunningTime="2025-12-09 11:58:15.817646466 +0000 UTC m=+1878.357530782" Dec 09 11:58:19 crc kubenswrapper[4849]: I1209 11:58:19.828535 4849 generic.go:334] "Generic (PLEG): container finished" podID="dc9bff1c-d856-4cab-9c39-19d8106e6a35" containerID="41014c31ef3d7cc2ac6d9a11d6aeae5809a571a0727a7a7d796dd263e05d7343" exitCode=0 Dec 09 11:58:19 crc kubenswrapper[4849]: I1209 11:58:19.828752 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" event={"ID":"dc9bff1c-d856-4cab-9c39-19d8106e6a35","Type":"ContainerDied","Data":"41014c31ef3d7cc2ac6d9a11d6aeae5809a571a0727a7a7d796dd263e05d7343"} Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.297915 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.485564 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-ssh-key\") pod \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.485740 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-inventory\") pod \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.486513 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5k2w\" (UniqueName: \"kubernetes.io/projected/dc9bff1c-d856-4cab-9c39-19d8106e6a35-kube-api-access-m5k2w\") pod \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.492133 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9bff1c-d856-4cab-9c39-19d8106e6a35-kube-api-access-m5k2w" (OuterVolumeSpecName: "kube-api-access-m5k2w") pod "dc9bff1c-d856-4cab-9c39-19d8106e6a35" (UID: "dc9bff1c-d856-4cab-9c39-19d8106e6a35"). InnerVolumeSpecName "kube-api-access-m5k2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:58:21 crc kubenswrapper[4849]: E1209 11:58:21.516430 4849 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-inventory podName:dc9bff1c-d856-4cab-9c39-19d8106e6a35 nodeName:}" failed. No retries permitted until 2025-12-09 11:58:22.016384392 +0000 UTC m=+1884.556268708 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-inventory") pod "dc9bff1c-d856-4cab-9c39-19d8106e6a35" (UID: "dc9bff1c-d856-4cab-9c39-19d8106e6a35") : error deleting /var/lib/kubelet/pods/dc9bff1c-d856-4cab-9c39-19d8106e6a35/volume-subpaths: remove /var/lib/kubelet/pods/dc9bff1c-d856-4cab-9c39-19d8106e6a35/volume-subpaths: no such file or directory Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.521074 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dc9bff1c-d856-4cab-9c39-19d8106e6a35" (UID: "dc9bff1c-d856-4cab-9c39-19d8106e6a35"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.588936 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.588976 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5k2w\" (UniqueName: \"kubernetes.io/projected/dc9bff1c-d856-4cab-9c39-19d8106e6a35-kube-api-access-m5k2w\") on node \"crc\" DevicePath \"\"" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.845659 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" event={"ID":"dc9bff1c-d856-4cab-9c39-19d8106e6a35","Type":"ContainerDied","Data":"689789f20207433547175c322416ce9c8f641f670f31e9e06dc6046d25fbd8d1"} Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.845983 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689789f20207433547175c322416ce9c8f641f670f31e9e06dc6046d25fbd8d1" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.845685 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.927283 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87"] Dec 09 11:58:21 crc kubenswrapper[4849]: E1209 11:58:21.927986 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9bff1c-d856-4cab-9c39-19d8106e6a35" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.928085 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9bff1c-d856-4cab-9c39-19d8106e6a35" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.928435 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9bff1c-d856-4cab-9c39-19d8106e6a35" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.929283 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:21 crc kubenswrapper[4849]: I1209 11:58:21.955037 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87"] Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.095942 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-inventory\") pod \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\" (UID: \"dc9bff1c-d856-4cab-9c39-19d8106e6a35\") " Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.096268 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lzn87\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.096324 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zbcs\" (UniqueName: \"kubernetes.io/projected/321f3d04-9b3c-456d-b13c-6db5d42dedb7-kube-api-access-6zbcs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lzn87\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.097470 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lzn87\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.119589 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-inventory" (OuterVolumeSpecName: "inventory") pod "dc9bff1c-d856-4cab-9c39-19d8106e6a35" (UID: "dc9bff1c-d856-4cab-9c39-19d8106e6a35"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.200254 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lzn87\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.200371 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lzn87\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.200488 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zbcs\" (UniqueName: \"kubernetes.io/projected/321f3d04-9b3c-456d-b13c-6db5d42dedb7-kube-api-access-6zbcs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lzn87\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.200725 4849 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9bff1c-d856-4cab-9c39-19d8106e6a35-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.213449 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lzn87\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.223357 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lzn87\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.232375 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zbcs\" (UniqueName: \"kubernetes.io/projected/321f3d04-9b3c-456d-b13c-6db5d42dedb7-kube-api-access-6zbcs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lzn87\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.252856 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:58:22 crc kubenswrapper[4849]: I1209 11:58:22.887592 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87"] Dec 09 11:58:23 crc kubenswrapper[4849]: I1209 11:58:23.865589 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" event={"ID":"321f3d04-9b3c-456d-b13c-6db5d42dedb7","Type":"ContainerStarted","Data":"11c0fa6bbc686d0b2a5e1756e04e95d37ffaabf9eb10a405d1ea5567ee2cb460"} Dec 09 11:58:23 crc kubenswrapper[4849]: I1209 11:58:23.866441 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" event={"ID":"321f3d04-9b3c-456d-b13c-6db5d42dedb7","Type":"ContainerStarted","Data":"3ac64f341c80535a8b76b7a5370f49c1a84bb883132e3ffeae5b665431a345cf"} Dec 09 11:58:23 crc kubenswrapper[4849]: I1209 11:58:23.888211 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" podStartSLOduration=2.292876615 podStartE2EDuration="2.888189416s" podCreationTimestamp="2025-12-09 11:58:21 +0000 UTC" firstStartedPulling="2025-12-09 11:58:22.893642775 +0000 UTC m=+1885.433527091" lastFinishedPulling="2025-12-09 11:58:23.488955576 +0000 UTC m=+1886.028839892" observedRunningTime="2025-12-09 11:58:23.885591561 +0000 UTC m=+1886.425475917" watchObservedRunningTime="2025-12-09 11:58:23.888189416 +0000 UTC m=+1886.428073742" Dec 09 11:58:26 crc kubenswrapper[4849]: I1209 11:58:26.058032 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vf9pp"] Dec 09 11:58:26 crc kubenswrapper[4849]: I1209 11:58:26.069606 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vf9pp"] Dec 09 11:58:26 crc kubenswrapper[4849]: I1209 11:58:26.548117 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7247c3-8d63-4753-969f-dcdb4eea86d1" path="/var/lib/kubelet/pods/6c7247c3-8d63-4753-969f-dcdb4eea86d1/volumes" Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.042734 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8gtvk"] Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.050996 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bkvhb"] Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.058604 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-618a-account-create-update-j7swh"] Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.071160 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8gtvk"] Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.079364 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3150-account-create-update-zd6b9"] Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.093338 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5458-account-create-update-cn6mz"] Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.131868 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-618a-account-create-update-j7swh"] Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.156239 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bkvhb"] 
Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.164866 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3150-account-create-update-zd6b9"] Dec 09 11:58:27 crc kubenswrapper[4849]: I1209 11:58:27.173166 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5458-account-create-update-cn6mz"] Dec 09 11:58:28 crc kubenswrapper[4849]: I1209 11:58:28.549834 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc" path="/var/lib/kubelet/pods/2a588afb-8e8a-4b60-96f7-7d24d4b5a5fc/volumes" Dec 09 11:58:28 crc kubenswrapper[4849]: I1209 11:58:28.551617 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e98d76-209b-48ef-bb19-c996e1fd5fbb" path="/var/lib/kubelet/pods/73e98d76-209b-48ef-bb19-c996e1fd5fbb/volumes" Dec 09 11:58:28 crc kubenswrapper[4849]: I1209 11:58:28.552622 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cfdbceb-1c2b-482d-b331-06c839bee145" path="/var/lib/kubelet/pods/7cfdbceb-1c2b-482d-b331-06c839bee145/volumes" Dec 09 11:58:28 crc kubenswrapper[4849]: I1209 11:58:28.553236 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b7c503-fd52-4bc9-88f0-a37f6346916d" path="/var/lib/kubelet/pods/f0b7c503-fd52-4bc9-88f0-a37f6346916d/volumes" Dec 09 11:58:28 crc kubenswrapper[4849]: I1209 11:58:28.554310 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f696cb20-6d6a-467c-9963-4ea7bd4bb894" path="/var/lib/kubelet/pods/f696cb20-6d6a-467c-9963-4ea7bd4bb894/volumes" Dec 09 11:58:59 crc kubenswrapper[4849]: I1209 11:58:59.050584 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-654nd"] Dec 09 11:58:59 crc kubenswrapper[4849]: I1209 11:58:59.065113 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-654nd"] Dec 09 11:59:00 crc kubenswrapper[4849]: I1209 11:59:00.552858 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f3ea3b-6680-4bc0-a6af-31fc894664ca" path="/var/lib/kubelet/pods/24f3ea3b-6680-4bc0-a6af-31fc894664ca/volumes" Dec 09 11:59:06 crc kubenswrapper[4849]: I1209 11:59:06.515835 4849 scope.go:117] "RemoveContainer" containerID="d582ebb6e31921bcf131f4b05a33be85e870df2f817384f1420d0f473af54344" Dec 09 11:59:06 crc kubenswrapper[4849]: I1209 11:59:06.544167 4849 scope.go:117] "RemoveContainer" containerID="fefbca211338881f99160d122884540c7dae45725114e94225498b6661dcdbe4" Dec 09 11:59:06 crc kubenswrapper[4849]: I1209 11:59:06.617395 4849 scope.go:117] "RemoveContainer" containerID="0e6155a514ff9498bc70089ea56f264c6c2822bbb855cf7be274cc45e371b6a3" Dec 09 11:59:06 crc kubenswrapper[4849]: I1209 11:59:06.652809 4849 scope.go:117] "RemoveContainer" containerID="c0a0105bbc82ccab06d746d48a023c556ea2cf970f5890a44f0063a6bbd35976" Dec 09 11:59:06 crc kubenswrapper[4849]: I1209 11:59:06.690722 4849 scope.go:117] "RemoveContainer" containerID="b9572446e566ec8d37d8f67b8456a5ddc62d4fb20e747ed110f4fa2c5f0705d3" Dec 09 11:59:06 crc kubenswrapper[4849]: I1209 11:59:06.736560 4849 scope.go:117] "RemoveContainer" containerID="7d34ef63bef99a54b16c4c8d3760fc06a08a6ee985371be78f3993b905e05fd3" Dec 09 11:59:06 crc kubenswrapper[4849]: I1209 11:59:06.793769 4849 scope.go:117] "RemoveContainer" containerID="f072917b3f014061fe718fbe99e09e6813185b1c406a8b2f83eac9d4fa8dc52c" Dec 09 11:59:21 crc kubenswrapper[4849]: I1209 11:59:21.352548 4849 generic.go:334] "Generic (PLEG): 
container finished" podID="321f3d04-9b3c-456d-b13c-6db5d42dedb7" containerID="11c0fa6bbc686d0b2a5e1756e04e95d37ffaabf9eb10a405d1ea5567ee2cb460" exitCode=0 Dec 09 11:59:21 crc kubenswrapper[4849]: I1209 11:59:21.352611 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" event={"ID":"321f3d04-9b3c-456d-b13c-6db5d42dedb7","Type":"ContainerDied","Data":"11c0fa6bbc686d0b2a5e1756e04e95d37ffaabf9eb10a405d1ea5567ee2cb460"} Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.802265 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.872992 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-inventory\") pod \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.873065 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zbcs\" (UniqueName: \"kubernetes.io/projected/321f3d04-9b3c-456d-b13c-6db5d42dedb7-kube-api-access-6zbcs\") pod \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.873123 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-ssh-key\") pod \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\" (UID: \"321f3d04-9b3c-456d-b13c-6db5d42dedb7\") " Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.882868 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321f3d04-9b3c-456d-b13c-6db5d42dedb7-kube-api-access-6zbcs" (OuterVolumeSpecName: "kube-api-access-6zbcs") pod "321f3d04-9b3c-456d-b13c-6db5d42dedb7" (UID: "321f3d04-9b3c-456d-b13c-6db5d42dedb7"). InnerVolumeSpecName "kube-api-access-6zbcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.903629 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-inventory" (OuterVolumeSpecName: "inventory") pod "321f3d04-9b3c-456d-b13c-6db5d42dedb7" (UID: "321f3d04-9b3c-456d-b13c-6db5d42dedb7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.910452 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "321f3d04-9b3c-456d-b13c-6db5d42dedb7" (UID: "321f3d04-9b3c-456d-b13c-6db5d42dedb7"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.975227 4849 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.975277 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zbcs\" (UniqueName: \"kubernetes.io/projected/321f3d04-9b3c-456d-b13c-6db5d42dedb7-kube-api-access-6zbcs\") on node \"crc\" DevicePath \"\"" Dec 09 11:59:22 crc kubenswrapper[4849]: I1209 11:59:22.975290 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/321f3d04-9b3c-456d-b13c-6db5d42dedb7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.372155 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" event={"ID":"321f3d04-9b3c-456d-b13c-6db5d42dedb7","Type":"ContainerDied","Data":"3ac64f341c80535a8b76b7a5370f49c1a84bb883132e3ffeae5b665431a345cf"} Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.372198 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ac64f341c80535a8b76b7a5370f49c1a84bb883132e3ffeae5b665431a345cf" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.372218 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lzn87" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.588550 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sdzhv"] Dec 09 11:59:23 crc kubenswrapper[4849]: E1209 11:59:23.589500 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321f3d04-9b3c-456d-b13c-6db5d42dedb7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.589526 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="321f3d04-9b3c-456d-b13c-6db5d42dedb7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.589750 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="321f3d04-9b3c-456d-b13c-6db5d42dedb7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.590643 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.593721 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j9nv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.593879 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.593979 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.594123 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.607469 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sdzhv"] Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.686899 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sdzhv\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.687020 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sdzhv\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.688021 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm67j\" (UniqueName: \"kubernetes.io/projected/6393e5b3-a774-4273-8306-333ba2fb51ac-kube-api-access-jm67j\") pod \"ssh-known-hosts-edpm-deployment-sdzhv\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.790519 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sdzhv\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.790671 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm67j\" (UniqueName: \"kubernetes.io/projected/6393e5b3-a774-4273-8306-333ba2fb51ac-kube-api-access-jm67j\") pod \"ssh-known-hosts-edpm-deployment-sdzhv\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.790808 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sdzhv\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc 
kubenswrapper[4849]: I1209 11:59:23.799178 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sdzhv\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.800197 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sdzhv\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.811966 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm67j\" (UniqueName: \"kubernetes.io/projected/6393e5b3-a774-4273-8306-333ba2fb51ac-kube-api-access-jm67j\") pod \"ssh-known-hosts-edpm-deployment-sdzhv\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:23 crc kubenswrapper[4849]: I1209 11:59:23.916216 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:24 crc kubenswrapper[4849]: I1209 11:59:24.460192 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sdzhv"] Dec 09 11:59:24 crc kubenswrapper[4849]: W1209 11:59:24.479237 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6393e5b3_a774_4273_8306_333ba2fb51ac.slice/crio-3301d6f696871cd7885e12d8f67ef827858b64939febdba76a0a337bbfd2e211 WatchSource:0}: Error finding container 3301d6f696871cd7885e12d8f67ef827858b64939febdba76a0a337bbfd2e211: Status 404 returned error can't find the container with id 3301d6f696871cd7885e12d8f67ef827858b64939febdba76a0a337bbfd2e211 Dec 09 11:59:25 crc kubenswrapper[4849]: I1209 11:59:25.391603 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" event={"ID":"6393e5b3-a774-4273-8306-333ba2fb51ac","Type":"ContainerStarted","Data":"49fe8320e7eba465439a3399baa6bb3be5c37d03a0ce1bf95a7a377f1b5c11f3"} Dec 09 11:59:25 crc kubenswrapper[4849]: I1209 11:59:25.391981 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" event={"ID":"6393e5b3-a774-4273-8306-333ba2fb51ac","Type":"ContainerStarted","Data":"3301d6f696871cd7885e12d8f67ef827858b64939febdba76a0a337bbfd2e211"} Dec 09 11:59:25 crc kubenswrapper[4849]: I1209 11:59:25.414864 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" podStartSLOduration=1.9511487349999999 podStartE2EDuration="2.414833894s" podCreationTimestamp="2025-12-09 11:59:23 +0000 UTC" firstStartedPulling="2025-12-09 11:59:24.483047788 +0000 UTC m=+1947.022932104" lastFinishedPulling="2025-12-09 11:59:24.946732947 +0000 UTC m=+1947.486617263" observedRunningTime="2025-12-09 11:59:25.414752302 +0000 UTC m=+1947.954636618" watchObservedRunningTime="2025-12-09 11:59:25.414833894 +0000 UTC m=+1947.954718210" Dec 09 11:59:29 crc kubenswrapper[4849]: I1209 11:59:29.079823 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-cell-mapping-7qvhg"] Dec 09 11:59:29 crc kubenswrapper[4849]: I1209 11:59:29.089202 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrffl"] Dec 09 11:59:29 crc kubenswrapper[4849]: I1209 11:59:29.097943 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7qvhg"] Dec 09 11:59:29 crc kubenswrapper[4849]: I1209 11:59:29.126047 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrffl"] Dec 09 11:59:30 crc kubenswrapper[4849]: I1209 11:59:30.549645 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a968b26-11b2-421b-89bc-d481ce7ebe0a" path="/var/lib/kubelet/pods/2a968b26-11b2-421b-89bc-d481ce7ebe0a/volumes" Dec 09 11:59:30 crc kubenswrapper[4849]: I1209 11:59:30.551149 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db78fd7e-e02f-4ffa-9c38-675b7b021cc7" path="/var/lib/kubelet/pods/db78fd7e-e02f-4ffa-9c38-675b7b021cc7/volumes" Dec 09 11:59:33 crc kubenswrapper[4849]: I1209 11:59:33.456088 4849 generic.go:334] "Generic (PLEG): container finished" podID="6393e5b3-a774-4273-8306-333ba2fb51ac" containerID="49fe8320e7eba465439a3399baa6bb3be5c37d03a0ce1bf95a7a377f1b5c11f3" exitCode=0 Dec 09 11:59:33 crc kubenswrapper[4849]: I1209 11:59:33.456165 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" event={"ID":"6393e5b3-a774-4273-8306-333ba2fb51ac","Type":"ContainerDied","Data":"49fe8320e7eba465439a3399baa6bb3be5c37d03a0ce1bf95a7a377f1b5c11f3"} Dec 09 11:59:34 crc kubenswrapper[4849]: I1209 11:59:34.892693 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:34 crc kubenswrapper[4849]: I1209 11:59:34.924447 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-inventory-0\") pod \"6393e5b3-a774-4273-8306-333ba2fb51ac\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " Dec 09 11:59:34 crc kubenswrapper[4849]: I1209 11:59:34.924605 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm67j\" (UniqueName: \"kubernetes.io/projected/6393e5b3-a774-4273-8306-333ba2fb51ac-kube-api-access-jm67j\") pod \"6393e5b3-a774-4273-8306-333ba2fb51ac\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " Dec 09 11:59:34 crc kubenswrapper[4849]: I1209 11:59:34.924770 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-ssh-key-openstack-edpm-ipam\") pod \"6393e5b3-a774-4273-8306-333ba2fb51ac\" (UID: \"6393e5b3-a774-4273-8306-333ba2fb51ac\") " Dec 09 11:59:34 crc kubenswrapper[4849]: I1209 11:59:34.930929 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6393e5b3-a774-4273-8306-333ba2fb51ac-kube-api-access-jm67j" (OuterVolumeSpecName: "kube-api-access-jm67j") pod "6393e5b3-a774-4273-8306-333ba2fb51ac" (UID: "6393e5b3-a774-4273-8306-333ba2fb51ac"). InnerVolumeSpecName "kube-api-access-jm67j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:59:34 crc kubenswrapper[4849]: I1209 11:59:34.957817 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6393e5b3-a774-4273-8306-333ba2fb51ac" (UID: "6393e5b3-a774-4273-8306-333ba2fb51ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:59:34 crc kubenswrapper[4849]: I1209 11:59:34.986743 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6393e5b3-a774-4273-8306-333ba2fb51ac" (UID: "6393e5b3-a774-4273-8306-333ba2fb51ac"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.027394 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.027450 4849 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6393e5b3-a774-4273-8306-333ba2fb51ac-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.027464 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm67j\" (UniqueName: \"kubernetes.io/projected/6393e5b3-a774-4273-8306-333ba2fb51ac-kube-api-access-jm67j\") on node \"crc\" DevicePath \"\"" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.475683 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" event={"ID":"6393e5b3-a774-4273-8306-333ba2fb51ac","Type":"ContainerDied","Data":"3301d6f696871cd7885e12d8f67ef827858b64939febdba76a0a337bbfd2e211"} Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.475729 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3301d6f696871cd7885e12d8f67ef827858b64939febdba76a0a337bbfd2e211" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.475755 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sdzhv" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.560388 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95"] Dec 09 11:59:35 crc kubenswrapper[4849]: E1209 11:59:35.560982 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6393e5b3-a774-4273-8306-333ba2fb51ac" containerName="ssh-known-hosts-edpm-deployment" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.561045 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="6393e5b3-a774-4273-8306-333ba2fb51ac" containerName="ssh-known-hosts-edpm-deployment" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.561262 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="6393e5b3-a774-4273-8306-333ba2fb51ac" containerName="ssh-known-hosts-edpm-deployment" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.561961 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.565100 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.566324 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j9nv" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.566596 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.566950 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.573128 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95"] Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.636546 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f2l95\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.636617 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f2l95\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.636744 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnjm\" (UniqueName: \"kubernetes.io/projected/4cd4e831-b4c0-4217-a69d-6927c605d0d5-kube-api-access-9wnjm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f2l95\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.738798 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f2l95\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.738869 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f2l95\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.738963 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnjm\" (UniqueName: \"kubernetes.io/projected/4cd4e831-b4c0-4217-a69d-6927c605d0d5-kube-api-access-9wnjm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f2l95\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.744438 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f2l95\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.745238 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f2l95\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.765342 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnjm\" (UniqueName: \"kubernetes.io/projected/4cd4e831-b4c0-4217-a69d-6927c605d0d5-kube-api-access-9wnjm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f2l95\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:35 crc kubenswrapper[4849]: I1209 11:59:35.883599 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:36 crc kubenswrapper[4849]: I1209 11:59:36.510258 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95"] Dec 09 11:59:37 crc kubenswrapper[4849]: I1209 11:59:37.498062 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" event={"ID":"4cd4e831-b4c0-4217-a69d-6927c605d0d5","Type":"ContainerStarted","Data":"95dceab63c3cb7332adcd71f5cbc47423d8491a14243ee65396c2055d848bc0b"} Dec 09 11:59:37 crc kubenswrapper[4849]: I1209 11:59:37.498614 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" event={"ID":"4cd4e831-b4c0-4217-a69d-6927c605d0d5","Type":"ContainerStarted","Data":"1119ca5922e9aa61617d5b55bf9ee609266ee44616dac27dd691a2b1c8266e49"} Dec 09 11:59:37 crc kubenswrapper[4849]: I1209 11:59:37.520090 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" podStartSLOduration=1.8376621549999999 podStartE2EDuration="2.520067997s" podCreationTimestamp="2025-12-09 11:59:35 +0000 UTC" firstStartedPulling="2025-12-09 11:59:36.524958875 +0000 UTC m=+1959.064843191" lastFinishedPulling="2025-12-09 11:59:37.207364717 +0000 UTC m=+1959.747249033" observedRunningTime="2025-12-09 11:59:37.512799949 +0000 UTC m=+1960.052684265" watchObservedRunningTime="2025-12-09 11:59:37.520067997 +0000 UTC m=+1960.059952303" Dec 09 11:59:47 crc kubenswrapper[4849]: I1209 11:59:47.628834 4849 generic.go:334] "Generic (PLEG): container finished" podID="4cd4e831-b4c0-4217-a69d-6927c605d0d5" containerID="95dceab63c3cb7332adcd71f5cbc47423d8491a14243ee65396c2055d848bc0b" exitCode=0 Dec 09 11:59:47 crc kubenswrapper[4849]: I1209 11:59:47.628939 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" 
event={"ID":"4cd4e831-b4c0-4217-a69d-6927c605d0d5","Type":"ContainerDied","Data":"95dceab63c3cb7332adcd71f5cbc47423d8491a14243ee65396c2055d848bc0b"} Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.109537 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.203056 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-ssh-key\") pod \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.203198 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wnjm\" (UniqueName: \"kubernetes.io/projected/4cd4e831-b4c0-4217-a69d-6927c605d0d5-kube-api-access-9wnjm\") pod \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.203278 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-inventory\") pod \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\" (UID: \"4cd4e831-b4c0-4217-a69d-6927c605d0d5\") " Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.208896 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd4e831-b4c0-4217-a69d-6927c605d0d5-kube-api-access-9wnjm" (OuterVolumeSpecName: "kube-api-access-9wnjm") pod "4cd4e831-b4c0-4217-a69d-6927c605d0d5" (UID: "4cd4e831-b4c0-4217-a69d-6927c605d0d5"). InnerVolumeSpecName "kube-api-access-9wnjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.233282 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-inventory" (OuterVolumeSpecName: "inventory") pod "4cd4e831-b4c0-4217-a69d-6927c605d0d5" (UID: "4cd4e831-b4c0-4217-a69d-6927c605d0d5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.241594 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4cd4e831-b4c0-4217-a69d-6927c605d0d5" (UID: "4cd4e831-b4c0-4217-a69d-6927c605d0d5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.306476 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.306566 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wnjm\" (UniqueName: \"kubernetes.io/projected/4cd4e831-b4c0-4217-a69d-6927c605d0d5-kube-api-access-9wnjm\") on node \"crc\" DevicePath \"\"" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.306588 4849 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cd4e831-b4c0-4217-a69d-6927c605d0d5-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.649201 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" event={"ID":"4cd4e831-b4c0-4217-a69d-6927c605d0d5","Type":"ContainerDied","Data":"1119ca5922e9aa61617d5b55bf9ee609266ee44616dac27dd691a2b1c8266e49"} Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.649275 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1119ca5922e9aa61617d5b55bf9ee609266ee44616dac27dd691a2b1c8266e49" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.649357 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f2l95" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.763517 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m"] Dec 09 11:59:49 crc kubenswrapper[4849]: E1209 11:59:49.764145 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd4e831-b4c0-4217-a69d-6927c605d0d5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.764191 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd4e831-b4c0-4217-a69d-6927c605d0d5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.764462 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd4e831-b4c0-4217-a69d-6927c605d0d5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.765219 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.775290 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.775386 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.775516 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7j9nv" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.775303 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.777920 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m"] Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.815934 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g597m\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.816046 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ds4m\" (UniqueName: \"kubernetes.io/projected/97c83fff-a07f-484a-8f87-d15c41ebb56a-kube-api-access-6ds4m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g597m\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.816080 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g597m\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.917938 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g597m\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.918039 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ds4m\" (UniqueName: \"kubernetes.io/projected/97c83fff-a07f-484a-8f87-d15c41ebb56a-kube-api-access-6ds4m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g597m\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.918070 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g597m\" (UID: 
\"97c83fff-a07f-484a-8f87-d15c41ebb56a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.921966 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g597m\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.923315 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g597m\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:49 crc kubenswrapper[4849]: I1209 11:59:49.937799 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ds4m\" (UniqueName: \"kubernetes.io/projected/97c83fff-a07f-484a-8f87-d15c41ebb56a-kube-api-access-6ds4m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g597m\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:50 crc kubenswrapper[4849]: I1209 11:59:50.088928 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" Dec 09 11:59:50 crc kubenswrapper[4849]: I1209 11:59:50.760828 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m"] Dec 09 11:59:51 crc kubenswrapper[4849]: I1209 11:59:51.132612 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 11:59:51 crc kubenswrapper[4849]: I1209 11:59:51.132671 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 11:59:51 crc kubenswrapper[4849]: I1209 11:59:51.668880 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" event={"ID":"97c83fff-a07f-484a-8f87-d15c41ebb56a","Type":"ContainerStarted","Data":"46c317335729406ac9ff53d3a2d1b54ef4797216bc1fb441e6a388713e0214b3"} Dec 09 11:59:51 crc kubenswrapper[4849]: I1209 11:59:51.669254 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" event={"ID":"97c83fff-a07f-484a-8f87-d15c41ebb56a","Type":"ContainerStarted","Data":"c2ae4a51e766ae2bb0df11d721ebf6218fdbcbcd31cd77b855b64cf9982a755c"} Dec 09 11:59:51 crc kubenswrapper[4849]: I1209 11:59:51.693377 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" podStartSLOduration=2.272862297 podStartE2EDuration="2.693354445s" podCreationTimestamp="2025-12-09 11:59:49 +0000 UTC" firstStartedPulling="2025-12-09 
11:59:50.769834401 +0000 UTC m=+1973.309718717" lastFinishedPulling="2025-12-09 11:59:51.190326549 +0000 UTC m=+1973.730210865" observedRunningTime="2025-12-09 11:59:51.687776858 +0000 UTC m=+1974.227661174" watchObservedRunningTime="2025-12-09 11:59:51.693354445 +0000 UTC m=+1974.233238771" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.157039 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h"] Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.159072 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.164914 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.164959 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.238379 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h"] Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.321954 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmzlp\" (UniqueName: \"kubernetes.io/projected/560837ed-5d96-4516-9338-6c98d81bc0ae-kube-api-access-qmzlp\") pod \"collect-profiles-29421360-fpf2h\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.322039 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/560837ed-5d96-4516-9338-6c98d81bc0ae-config-volume\") pod \"collect-profiles-29421360-fpf2h\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.322123 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/560837ed-5d96-4516-9338-6c98d81bc0ae-secret-volume\") pod \"collect-profiles-29421360-fpf2h\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.423951 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmzlp\" (UniqueName: \"kubernetes.io/projected/560837ed-5d96-4516-9338-6c98d81bc0ae-kube-api-access-qmzlp\") pod \"collect-profiles-29421360-fpf2h\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.424077 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/560837ed-5d96-4516-9338-6c98d81bc0ae-config-volume\") pod \"collect-profiles-29421360-fpf2h\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.424167 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/560837ed-5d96-4516-9338-6c98d81bc0ae-secret-volume\") pod \"collect-profiles-29421360-fpf2h\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.426007 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/560837ed-5d96-4516-9338-6c98d81bc0ae-config-volume\") pod \"collect-profiles-29421360-fpf2h\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.433518 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/560837ed-5d96-4516-9338-6c98d81bc0ae-secret-volume\") pod \"collect-profiles-29421360-fpf2h\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.452067 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmzlp\" (UniqueName: \"kubernetes.io/projected/560837ed-5d96-4516-9338-6c98d81bc0ae-kube-api-access-qmzlp\") pod \"collect-profiles-29421360-fpf2h\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:00 crc kubenswrapper[4849]: I1209 12:00:00.512264 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" Dec 09 12:00:01 crc kubenswrapper[4849]: I1209 12:00:00.995707 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h"] Dec 09 12:00:01 crc kubenswrapper[4849]: I1209 12:00:01.752552 4849 generic.go:334] "Generic (PLEG): container finished" podID="560837ed-5d96-4516-9338-6c98d81bc0ae" containerID="7c0a37d5cd5605951f1033f13038963ce87ab870667e7ac4ef5a7ae4478372a2" exitCode=0 Dec 09 12:00:01 crc kubenswrapper[4849]: I1209 12:00:01.752678 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" event={"ID":"560837ed-5d96-4516-9338-6c98d81bc0ae","Type":"ContainerDied","Data":"7c0a37d5cd5605951f1033f13038963ce87ab870667e7ac4ef5a7ae4478372a2"} Dec 09 12:00:01 crc kubenswrapper[4849]: I1209 12:00:01.753626 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" event={"ID":"560837ed-5d96-4516-9338-6c98d81bc0ae","Type":"ContainerStarted","Data":"3eafd4b510b29aee83876f0aabb4008e33e2be31b1b6a8d1bd72c11b43f83d91"} Dec 09 12:00:02 crc kubenswrapper[4849]: I1209 12:00:02.761808 4849 generic.go:334] "Generic (PLEG): container finished" podID="97c83fff-a07f-484a-8f87-d15c41ebb56a" containerID="46c317335729406ac9ff53d3a2d1b54ef4797216bc1fb441e6a388713e0214b3" exitCode=0 Dec 09 12:00:02 crc kubenswrapper[4849]: I1209 12:00:02.761979 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" event={"ID":"97c83fff-a07f-484a-8f87-d15c41ebb56a","Type":"ContainerDied","Data":"46c317335729406ac9ff53d3a2d1b54ef4797216bc1fb441e6a388713e0214b3"} Dec 09 
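The numeric suffixes on the CronJob-owned pods in this window (collect-profiles-29421360 above, and keystone-cron-29421361 a minute later in this log) encode the scheduled run: the upstream CronJob controller names each Job after the scheduled time in minutes since the Unix epoch. Decoding the suffixes confirms the 12:00 and 12:01 schedules seen here:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Job-name suffixes from this log window; the suffix is the scheduled
	// time expressed in minutes since the Unix epoch.
	for _, minutes := range []int64{29421360, 29421361} {
		fmt.Println(minutes, "->", time.Unix(minutes*60, 0).UTC())
	}
	// Output:
	// 29421360 -> 2025-12-09 12:00:00 +0000 UTC
	// 29421361 -> 2025-12-09 12:01:00 +0000 UTC
}
```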
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.205714 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/560837ed-5d96-4516-9338-6c98d81bc0ae-config-volume\") pod \"560837ed-5d96-4516-9338-6c98d81bc0ae\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") "
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.205866 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/560837ed-5d96-4516-9338-6c98d81bc0ae-secret-volume\") pod \"560837ed-5d96-4516-9338-6c98d81bc0ae\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") "
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.205902 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmzlp\" (UniqueName: \"kubernetes.io/projected/560837ed-5d96-4516-9338-6c98d81bc0ae-kube-api-access-qmzlp\") pod \"560837ed-5d96-4516-9338-6c98d81bc0ae\" (UID: \"560837ed-5d96-4516-9338-6c98d81bc0ae\") "
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.206770 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/560837ed-5d96-4516-9338-6c98d81bc0ae-config-volume" (OuterVolumeSpecName: "config-volume") pod "560837ed-5d96-4516-9338-6c98d81bc0ae" (UID: "560837ed-5d96-4516-9338-6c98d81bc0ae"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.207306 4849 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/560837ed-5d96-4516-9338-6c98d81bc0ae-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.211786 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560837ed-5d96-4516-9338-6c98d81bc0ae-kube-api-access-qmzlp" (OuterVolumeSpecName: "kube-api-access-qmzlp") pod "560837ed-5d96-4516-9338-6c98d81bc0ae" (UID: "560837ed-5d96-4516-9338-6c98d81bc0ae"). InnerVolumeSpecName "kube-api-access-qmzlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.214583 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560837ed-5d96-4516-9338-6c98d81bc0ae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "560837ed-5d96-4516-9338-6c98d81bc0ae" (UID: "560837ed-5d96-4516-9338-6c98d81bc0ae"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.308315 4849 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/560837ed-5d96-4516-9338-6c98d81bc0ae-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.308350 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmzlp\" (UniqueName: \"kubernetes.io/projected/560837ed-5d96-4516-9338-6c98d81bc0ae-kube-api-access-qmzlp\") on node \"crc\" DevicePath \"\""
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.771115 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h"
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.771685 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-fpf2h" event={"ID":"560837ed-5d96-4516-9338-6c98d81bc0ae","Type":"ContainerDied","Data":"3eafd4b510b29aee83876f0aabb4008e33e2be31b1b6a8d1bd72c11b43f83d91"}
Dec 09 12:00:03 crc kubenswrapper[4849]: I1209 12:00:03.771750 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eafd4b510b29aee83876f0aabb4008e33e2be31b1b6a8d1bd72c11b43f83d91"
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.323193 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m"
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.341035 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j"]
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.349145 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421315-g667j"]
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.436589 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-ssh-key\") pod \"97c83fff-a07f-484a-8f87-d15c41ebb56a\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") "
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.436672 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-inventory\") pod \"97c83fff-a07f-484a-8f87-d15c41ebb56a\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") "
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.436781 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ds4m\" (UniqueName: \"kubernetes.io/projected/97c83fff-a07f-484a-8f87-d15c41ebb56a-kube-api-access-6ds4m\") pod \"97c83fff-a07f-484a-8f87-d15c41ebb56a\" (UID: \"97c83fff-a07f-484a-8f87-d15c41ebb56a\") "
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.492648 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-inventory" (OuterVolumeSpecName: "inventory") pod "97c83fff-a07f-484a-8f87-d15c41ebb56a" (UID: "97c83fff-a07f-484a-8f87-d15c41ebb56a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.493689 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c83fff-a07f-484a-8f87-d15c41ebb56a-kube-api-access-6ds4m" (OuterVolumeSpecName: "kube-api-access-6ds4m") pod "97c83fff-a07f-484a-8f87-d15c41ebb56a" (UID: "97c83fff-a07f-484a-8f87-d15c41ebb56a"). InnerVolumeSpecName "kube-api-access-6ds4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.498866 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "97c83fff-a07f-484a-8f87-d15c41ebb56a" (UID: "97c83fff-a07f-484a-8f87-d15c41ebb56a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.539682 4849 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.539717 4849 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97c83fff-a07f-484a-8f87-d15c41ebb56a-inventory\") on node \"crc\" DevicePath \"\""
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.539736 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ds4m\" (UniqueName: \"kubernetes.io/projected/97c83fff-a07f-484a-8f87-d15c41ebb56a-kube-api-access-6ds4m\") on node \"crc\" DevicePath \"\""
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.549749 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9eff9a-660a-450b-9c63-c473634e7d0a" path="/var/lib/kubelet/pods/8e9eff9a-660a-450b-9c63-c473634e7d0a/volumes"
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.789740 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m" event={"ID":"97c83fff-a07f-484a-8f87-d15c41ebb56a","Type":"ContainerDied","Data":"c2ae4a51e766ae2bb0df11d721ebf6218fdbcbcd31cd77b855b64cf9982a755c"}
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.790108 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ae4a51e766ae2bb0df11d721ebf6218fdbcbcd31cd77b855b64cf9982a755c"
Dec 09 12:00:04 crc kubenswrapper[4849]: I1209 12:00:04.789838 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g597m"
Dec 09 12:00:06 crc kubenswrapper[4849]: I1209 12:00:06.959692 4849 scope.go:117] "RemoveContainer" containerID="f3eda02d24fc428570df77d00f42f74cfa8b8429a47212e4d92a16e1f17d15c7"
Dec 09 12:00:07 crc kubenswrapper[4849]: I1209 12:00:07.025703 4849 scope.go:117] "RemoveContainer" containerID="2546e724fda2a396640e78ad94cd0ea55a32a8b524b627eaf64db6dc13ca49cb"
Dec 09 12:00:07 crc kubenswrapper[4849]: I1209 12:00:07.048507 4849 scope.go:117] "RemoveContainer" containerID="5f61ede7d81f986af270ddec4f312688d608480c925b7b5923a61e2788a7c3c5"
Dec 09 12:00:11 crc kubenswrapper[4849]: I1209 12:00:11.043174 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xzqt9"]
Dec 09 12:00:11 crc kubenswrapper[4849]: I1209 12:00:11.053612 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xzqt9"]
Dec 09 12:00:12 crc kubenswrapper[4849]: I1209 12:00:12.550981 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a2e163-5f9e-463e-baba-5dff706bbdd4" path="/var/lib/kubelet/pods/b8a2e163-5f9e-463e-baba-5dff706bbdd4/volumes"
Dec 09 12:00:21 crc kubenswrapper[4849]: I1209 12:00:21.133339 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:00:21 crc kubenswrapper[4849]: I1209 12:00:21.134344 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:00:51 crc kubenswrapper[4849]: I1209 12:00:51.132526 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:00:51 crc kubenswrapper[4849]: I1209 12:00:51.133146 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:00:51 crc kubenswrapper[4849]: I1209 12:00:51.133198 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx"
Dec 09 12:00:51 crc kubenswrapper[4849]: I1209 12:00:51.133956 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c886b97127ff3cfff7eb01a274c621a14da03725270ed7e7327b9be287540ec"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 12:00:51 crc kubenswrapper[4849]: I1209 12:00:51.134023 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://7c886b97127ff3cfff7eb01a274c621a14da03725270ed7e7327b9be287540ec" gracePeriod=600
Dec 09 12:00:51 crc kubenswrapper[4849]: I1209 12:00:51.312272 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="7c886b97127ff3cfff7eb01a274c621a14da03725270ed7e7327b9be287540ec" exitCode=0
Dec 09 12:00:51 crc kubenswrapper[4849]: I1209 12:00:51.312327 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"7c886b97127ff3cfff7eb01a274c621a14da03725270ed7e7327b9be287540ec"}
Dec 09 12:00:51 crc kubenswrapper[4849]: I1209 12:00:51.312365 4849 scope.go:117] "RemoveContainer" containerID="49980e02a19e1e02f5aac62ff799d17e069a9174e69fb5bd9b4585d63e46a3f7"
Dec 09 12:00:52 crc kubenswrapper[4849]: I1209 12:00:52.325558 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209"}
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.191219 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29421361-j9tc6"]
Dec 09 12:01:00 crc kubenswrapper[4849]: E1209 12:01:00.192274 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c83fff-a07f-484a-8f87-d15c41ebb56a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.192297 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c83fff-a07f-484a-8f87-d15c41ebb56a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.192297 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c83fff-a07f-484a-8f87-d15c41ebb56a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 12:01:00 crc kubenswrapper[4849]: E1209 12:01:00.192317 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560837ed-5d96-4516-9338-6c98d81bc0ae" containerName="collect-profiles"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.192324 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="560837ed-5d96-4516-9338-6c98d81bc0ae" containerName="collect-profiles"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.192563 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="560837ed-5d96-4516-9338-6c98d81bc0ae" containerName="collect-profiles"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.192584 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c83fff-a07f-484a-8f87-d15c41ebb56a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.193370 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421361-j9tc6"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.207379 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421361-j9tc6"]
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.371652 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-combined-ca-bundle\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.371703 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lvsf\" (UniqueName: \"kubernetes.io/projected/41f58e35-612c-46b5-a376-d59343443e53-kube-api-access-8lvsf\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.372069 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-fernet-keys\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.372259 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-config-data\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.474708 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-config-data\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6"
Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.475271 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-combined-ca-bundle\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.475331 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lvsf\" (UniqueName: \"kubernetes.io/projected/41f58e35-612c-46b5-a376-d59343443e53-kube-api-access-8lvsf\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.475855 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-fernet-keys\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.485146 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-combined-ca-bundle\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.485170 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-fernet-keys\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.486910 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-config-data\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.507719 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lvsf\" (UniqueName: \"kubernetes.io/projected/41f58e35-612c-46b5-a376-d59343443e53-kube-api-access-8lvsf\") pod \"keystone-cron-29421361-j9tc6\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:00 crc kubenswrapper[4849]: I1209 12:01:00.528613 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:01 crc kubenswrapper[4849]: I1209 12:01:01.026925 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421361-j9tc6"] Dec 09 12:01:01 crc kubenswrapper[4849]: I1209 12:01:01.410553 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421361-j9tc6" event={"ID":"41f58e35-612c-46b5-a376-d59343443e53","Type":"ContainerStarted","Data":"5104a4a63b3e85e6d570b0dafcc3ddfd573efbdf48856da85d37a5b72d925193"} Dec 09 12:01:01 crc kubenswrapper[4849]: I1209 12:01:01.410600 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421361-j9tc6" event={"ID":"41f58e35-612c-46b5-a376-d59343443e53","Type":"ContainerStarted","Data":"55da7a0828dbe626a107d62fe60d49a667902eff174713b14f5439173654afdc"} Dec 09 12:01:01 crc kubenswrapper[4849]: I1209 12:01:01.438196 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29421361-j9tc6" podStartSLOduration=1.438176406 podStartE2EDuration="1.438176406s" podCreationTimestamp="2025-12-09 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:01:01.432103917 +0000 UTC m=+2043.971988233" watchObservedRunningTime="2025-12-09 12:01:01.438176406 +0000 UTC m=+2043.978060722" Dec 09 12:01:04 crc kubenswrapper[4849]: I1209 12:01:04.442462 4849 generic.go:334] "Generic (PLEG): container finished" podID="41f58e35-612c-46b5-a376-d59343443e53" containerID="5104a4a63b3e85e6d570b0dafcc3ddfd573efbdf48856da85d37a5b72d925193" exitCode=0 Dec 09 12:01:04 crc kubenswrapper[4849]: I1209 12:01:04.444531 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421361-j9tc6" event={"ID":"41f58e35-612c-46b5-a376-d59343443e53","Type":"ContainerDied","Data":"5104a4a63b3e85e6d570b0dafcc3ddfd573efbdf48856da85d37a5b72d925193"} Dec 09 12:01:05 crc kubenswrapper[4849]: I1209 12:01:05.795617 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:05 crc kubenswrapper[4849]: I1209 12:01:05.971195 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lvsf\" (UniqueName: \"kubernetes.io/projected/41f58e35-612c-46b5-a376-d59343443e53-kube-api-access-8lvsf\") pod \"41f58e35-612c-46b5-a376-d59343443e53\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " Dec 09 12:01:05 crc kubenswrapper[4849]: I1209 12:01:05.971390 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-combined-ca-bundle\") pod \"41f58e35-612c-46b5-a376-d59343443e53\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " Dec 09 12:01:05 crc kubenswrapper[4849]: I1209 12:01:05.971721 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-fernet-keys\") pod \"41f58e35-612c-46b5-a376-d59343443e53\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " Dec 09 12:01:05 crc kubenswrapper[4849]: I1209 12:01:05.971848 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-config-data\") pod \"41f58e35-612c-46b5-a376-d59343443e53\" (UID: \"41f58e35-612c-46b5-a376-d59343443e53\") " Dec 09 12:01:05 crc kubenswrapper[4849]: I1209 12:01:05.993643 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f58e35-612c-46b5-a376-d59343443e53-kube-api-access-8lvsf" (OuterVolumeSpecName: "kube-api-access-8lvsf") pod "41f58e35-612c-46b5-a376-d59343443e53" (UID: "41f58e35-612c-46b5-a376-d59343443e53"). InnerVolumeSpecName "kube-api-access-8lvsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:01:05 crc kubenswrapper[4849]: I1209 12:01:05.993748 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "41f58e35-612c-46b5-a376-d59343443e53" (UID: "41f58e35-612c-46b5-a376-d59343443e53"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:01:06 crc kubenswrapper[4849]: I1209 12:01:06.009552 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41f58e35-612c-46b5-a376-d59343443e53" (UID: "41f58e35-612c-46b5-a376-d59343443e53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:01:06 crc kubenswrapper[4849]: I1209 12:01:06.045264 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-config-data" (OuterVolumeSpecName: "config-data") pod "41f58e35-612c-46b5-a376-d59343443e53" (UID: "41f58e35-612c-46b5-a376-d59343443e53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:01:06 crc kubenswrapper[4849]: I1209 12:01:06.078398 4849 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:01:06 crc kubenswrapper[4849]: I1209 12:01:06.078496 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lvsf\" (UniqueName: \"kubernetes.io/projected/41f58e35-612c-46b5-a376-d59343443e53-kube-api-access-8lvsf\") on node \"crc\" DevicePath \"\"" Dec 09 12:01:06 crc kubenswrapper[4849]: I1209 12:01:06.078520 4849 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:01:06 crc kubenswrapper[4849]: I1209 12:01:06.078531 4849 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41f58e35-612c-46b5-a376-d59343443e53-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 12:01:06 crc kubenswrapper[4849]: I1209 12:01:06.469755 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421361-j9tc6" event={"ID":"41f58e35-612c-46b5-a376-d59343443e53","Type":"ContainerDied","Data":"55da7a0828dbe626a107d62fe60d49a667902eff174713b14f5439173654afdc"} Dec 09 12:01:06 crc kubenswrapper[4849]: I1209 12:01:06.469825 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55da7a0828dbe626a107d62fe60d49a667902eff174713b14f5439173654afdc" Dec 09 12:01:06 crc kubenswrapper[4849]: I1209 12:01:06.469904 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421361-j9tc6" Dec 09 12:01:07 crc kubenswrapper[4849]: I1209 12:01:07.177011 4849 scope.go:117] "RemoveContainer" containerID="0f76eb5fdaee1b2552caafe756b826f7dce88dcd9ffbd74006b132844821a07b" Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.363986 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mqkjc"] Dec 09 12:02:23 crc kubenswrapper[4849]: E1209 12:02:23.365091 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f58e35-612c-46b5-a376-d59343443e53" containerName="keystone-cron" Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.365108 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f58e35-612c-46b5-a376-d59343443e53" containerName="keystone-cron" Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.365314 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f58e35-612c-46b5-a376-d59343443e53" containerName="keystone-cron" Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.366546 4849 util.go:30] "No sandbox for pod can be found. 
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.366546 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.376140 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqkjc"]
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.486384 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-catalog-content\") pod \"redhat-operators-mqkjc\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") " pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.486447 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-utilities\") pod \"redhat-operators-mqkjc\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") " pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.486492 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvrp\" (UniqueName: \"kubernetes.io/projected/0f24be54-298b-4bbb-8eaf-379780bd36e5-kube-api-access-5mvrp\") pod \"redhat-operators-mqkjc\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") " pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.588151 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mvrp\" (UniqueName: \"kubernetes.io/projected/0f24be54-298b-4bbb-8eaf-379780bd36e5-kube-api-access-5mvrp\") pod \"redhat-operators-mqkjc\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") " pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.588314 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-catalog-content\") pod \"redhat-operators-mqkjc\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") " pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.588342 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-utilities\") pod \"redhat-operators-mqkjc\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") " pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.588867 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-utilities\") pod \"redhat-operators-mqkjc\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") " pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.589082 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-catalog-content\") pod \"redhat-operators-mqkjc\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") " pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.618297 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5mvrp\" (UniqueName: \"kubernetes.io/projected/0f24be54-298b-4bbb-8eaf-379780bd36e5-kube-api-access-5mvrp\") pod \"redhat-operators-mqkjc\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") " pod="openshift-marketplace/redhat-operators-mqkjc" Dec 09 12:02:23 crc kubenswrapper[4849]: I1209 12:02:23.717265 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqkjc" Dec 09 12:02:24 crc kubenswrapper[4849]: I1209 12:02:24.232704 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqkjc"] Dec 09 12:02:24 crc kubenswrapper[4849]: W1209 12:02:24.248224 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f24be54_298b_4bbb_8eaf_379780bd36e5.slice/crio-d7777c2e859424cb430f5c6b9d8b16f9b0829d358ff0965f62e0c45b87516696 WatchSource:0}: Error finding container d7777c2e859424cb430f5c6b9d8b16f9b0829d358ff0965f62e0c45b87516696: Status 404 returned error can't find the container with id d7777c2e859424cb430f5c6b9d8b16f9b0829d358ff0965f62e0c45b87516696 Dec 09 12:02:25 crc kubenswrapper[4849]: I1209 12:02:25.181139 4849 generic.go:334] "Generic (PLEG): container finished" podID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerID="5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649" exitCode=0 Dec 09 12:02:25 crc kubenswrapper[4849]: I1209 12:02:25.181204 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqkjc" event={"ID":"0f24be54-298b-4bbb-8eaf-379780bd36e5","Type":"ContainerDied","Data":"5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649"} Dec 09 12:02:25 crc kubenswrapper[4849]: I1209 12:02:25.181481 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqkjc" event={"ID":"0f24be54-298b-4bbb-8eaf-379780bd36e5","Type":"ContainerStarted","Data":"d7777c2e859424cb430f5c6b9d8b16f9b0829d358ff0965f62e0c45b87516696"} Dec 09 12:02:25 crc kubenswrapper[4849]: I1209 12:02:25.184806 4849 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:02:26 crc kubenswrapper[4849]: I1209 12:02:26.192796 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqkjc" event={"ID":"0f24be54-298b-4bbb-8eaf-379780bd36e5","Type":"ContainerStarted","Data":"026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125"} Dec 09 12:02:30 crc kubenswrapper[4849]: I1209 12:02:30.230361 4849 generic.go:334] "Generic (PLEG): container finished" podID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerID="026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125" exitCode=0 Dec 09 12:02:30 crc kubenswrapper[4849]: I1209 12:02:30.230477 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqkjc" event={"ID":"0f24be54-298b-4bbb-8eaf-379780bd36e5","Type":"ContainerDied","Data":"026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125"} Dec 09 12:02:31 crc kubenswrapper[4849]: I1209 12:02:31.240532 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqkjc" event={"ID":"0f24be54-298b-4bbb-8eaf-379780bd36e5","Type":"ContainerStarted","Data":"70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7"} Dec 09 12:02:31 crc kubenswrapper[4849]: I1209 12:02:31.265679 4849 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-mqkjc" podStartSLOduration=2.816891658 podStartE2EDuration="8.265657799s" podCreationTimestamp="2025-12-09 12:02:23 +0000 UTC" firstStartedPulling="2025-12-09 12:02:25.184478526 +0000 UTC m=+2127.724362852" lastFinishedPulling="2025-12-09 12:02:30.633244667 +0000 UTC m=+2133.173128993" observedRunningTime="2025-12-09 12:02:31.260720988 +0000 UTC m=+2133.800605304" watchObservedRunningTime="2025-12-09 12:02:31.265657799 +0000 UTC m=+2133.805542115"
Dec 09 12:02:33 crc kubenswrapper[4849]: I1209 12:02:33.718063 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:33 crc kubenswrapper[4849]: I1209 12:02:33.719862 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:34 crc kubenswrapper[4849]: I1209 12:02:34.766448 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mqkjc" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerName="registry-server" probeResult="failure" output=<
Dec 09 12:02:34 crc kubenswrapper[4849]: timeout: failed to connect service ":50051" within 1s
Dec 09 12:02:34 crc kubenswrapper[4849]: >
Dec 09 12:02:43 crc kubenswrapper[4849]: I1209 12:02:43.765852 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:43 crc kubenswrapper[4849]: I1209 12:02:43.820660 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:44 crc kubenswrapper[4849]: I1209 12:02:44.013486 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqkjc"]
Dec 09 12:02:45 crc kubenswrapper[4849]: I1209 12:02:45.374469 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mqkjc" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerName="registry-server" containerID="cri-o://70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7" gracePeriod=2
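
Note: the multi-line probe output above ('timeout: failed to connect service ":50051" within 1s') is the characteristic failure message of a gRPC-style health check against the registry-server port, and the startup probe keeps the pod unready until it passes at 12:02:43; the readiness flip follows immediately. The reachability half of that check reduces to a TCP connect under a one-second budget, sketched below in Python (host and port are taken from the log; the real probe also issues a health RPC over the connection, which is omitted here):

    import socket

    def startup_reachable(host="127.0.0.1", port=50051, timeout=1.0):
        """True if the registry-server socket accepts a connection within 1s."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            # Corresponds to the logged failure:
            #   timeout: failed to connect service ":50051" within 1s
            return False

Catalog pods routinely fail this probe for a few seconds after start while the registry loads its index, which is exactly the 12:02:34 to 12:02:43 window above.
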
Dec 09 12:02:45 crc kubenswrapper[4849]: I1209 12:02:45.798487 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqkjc"
Dec 09 12:02:45 crc kubenswrapper[4849]: I1209 12:02:45.925575 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-catalog-content\") pod \"0f24be54-298b-4bbb-8eaf-379780bd36e5\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") "
Dec 09 12:02:45 crc kubenswrapper[4849]: I1209 12:02:45.925635 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mvrp\" (UniqueName: \"kubernetes.io/projected/0f24be54-298b-4bbb-8eaf-379780bd36e5-kube-api-access-5mvrp\") pod \"0f24be54-298b-4bbb-8eaf-379780bd36e5\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") "
Dec 09 12:02:45 crc kubenswrapper[4849]: I1209 12:02:45.925729 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-utilities\") pod \"0f24be54-298b-4bbb-8eaf-379780bd36e5\" (UID: \"0f24be54-298b-4bbb-8eaf-379780bd36e5\") "
Dec 09 12:02:45 crc kubenswrapper[4849]: I1209 12:02:45.926878 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-utilities" (OuterVolumeSpecName: "utilities") pod "0f24be54-298b-4bbb-8eaf-379780bd36e5" (UID: "0f24be54-298b-4bbb-8eaf-379780bd36e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:02:45 crc kubenswrapper[4849]: I1209 12:02:45.930968 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f24be54-298b-4bbb-8eaf-379780bd36e5-kube-api-access-5mvrp" (OuterVolumeSpecName: "kube-api-access-5mvrp") pod "0f24be54-298b-4bbb-8eaf-379780bd36e5" (UID: "0f24be54-298b-4bbb-8eaf-379780bd36e5"). InnerVolumeSpecName "kube-api-access-5mvrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.027895 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mvrp\" (UniqueName: \"kubernetes.io/projected/0f24be54-298b-4bbb-8eaf-379780bd36e5-kube-api-access-5mvrp\") on node \"crc\" DevicePath \"\""
Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.027929 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.041963 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f24be54-298b-4bbb-8eaf-379780bd36e5" (UID: "0f24be54-298b-4bbb-8eaf-379780bd36e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.131827 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f24be54-298b-4bbb-8eaf-379780bd36e5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.392924 4849 generic.go:334] "Generic (PLEG): container finished" podID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerID="70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7" exitCode=0 Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.392987 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqkjc" event={"ID":"0f24be54-298b-4bbb-8eaf-379780bd36e5","Type":"ContainerDied","Data":"70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7"} Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.393026 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqkjc" event={"ID":"0f24be54-298b-4bbb-8eaf-379780bd36e5","Type":"ContainerDied","Data":"d7777c2e859424cb430f5c6b9d8b16f9b0829d358ff0965f62e0c45b87516696"} Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.393047 4849 scope.go:117] "RemoveContainer" containerID="70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7" Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.393232 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqkjc" Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.426501 4849 scope.go:117] "RemoveContainer" containerID="026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125" Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.448501 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqkjc"] Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.455649 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mqkjc"] Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.460604 4849 scope.go:117] "RemoveContainer" containerID="5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649" Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.486040 4849 scope.go:117] "RemoveContainer" containerID="70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7" Dec 09 12:02:46 crc kubenswrapper[4849]: E1209 12:02:46.486590 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7\": container with ID starting with 70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7 not found: ID does not exist" containerID="70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7" Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.486631 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7"} err="failed to get container status \"70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7\": rpc error: code = NotFound desc = could not find container \"70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7\": container with ID starting with 70e41206b0c61d140e8abb6227d41948b2038135e6a475120595be6671fc8fc7 not found: ID does not exist" Dec 09 12:02:46 crc 
kubenswrapper[4849]: I1209 12:02:46.486653 4849 scope.go:117] "RemoveContainer" containerID="026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125"
Dec 09 12:02:46 crc kubenswrapper[4849]: E1209 12:02:46.486877 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125\": container with ID starting with 026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125 not found: ID does not exist" containerID="026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125"
Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.486906 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125"} err="failed to get container status \"026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125\": rpc error: code = NotFound desc = could not find container \"026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125\": container with ID starting with 026ddacbc9c7969f7e8e2d1404fcf11c19a82f913990e7938d197a182703b125 not found: ID does not exist"
Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.486922 4849 scope.go:117] "RemoveContainer" containerID="5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649"
Dec 09 12:02:46 crc kubenswrapper[4849]: E1209 12:02:46.487179 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649\": container with ID starting with 5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649 not found: ID does not exist" containerID="5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649"
Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.487205 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649"} err="failed to get container status \"5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649\": rpc error: code = NotFound desc = could not find container \"5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649\": container with ID starting with 5eb2aad67200a2229739d797f2270e087c9ece94e4097dab46a14fda098d4649 not found: ID does not exist"
Dec 09 12:02:46 crc kubenswrapper[4849]: I1209 12:02:46.548850 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" path="/var/lib/kubelet/pods/0f24be54-298b-4bbb-8eaf-379780bd36e5/volumes"
Dec 09 12:02:51 crc kubenswrapper[4849]: I1209 12:02:51.133123 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:02:51 crc kubenswrapper[4849]: I1209 12:02:51.133665 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
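
Note: the NotFound errors above are expected, not a fault. By the time scope.go replays RemoveContainer for the IDs it recorded for redhat-operators-mqkjc, CRI-O has already deleted those containers together with the pod sandbox, so the status lookup fails and kubelet simply logs "DeleteContainer returned error" and proceeds to clean the orphaned volumes dir at 12:02:46.548850. The underlying pattern is idempotent cleanup, where "already gone" counts as success; a generic Python sketch (runtime.remove_container and NotFoundError are hypothetical stand-ins for a CRI client, not real API names):

    class NotFoundError(Exception):
        """Hypothetical stand-in for a CRI NotFound status."""

    def remove_container(runtime, container_id):
        """Delete a container, treating 'already gone' as success.

        Mirrors the kubelet behaviour logged above: a NotFound reply to a
        delete is logged and swallowed, never surfaced as a sync failure.
        """
        try:
            runtime.remove_container(container_id)  # hypothetical client call
        except NotFoundError:
            pass  # "...not found: ID does not exist" -- nothing left to do

Deletion paths are written this way so that a retried cleanup after a crash, or a race with the runtime's own garbage collection, cannot wedge the sync loop.
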
source="api" pods=["openshift-marketplace/certified-operators-p6gn6"] Dec 09 12:03:10 crc kubenswrapper[4849]: E1209 12:03:10.715008 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerName="extract-utilities" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.715025 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerName="extract-utilities" Dec 09 12:03:10 crc kubenswrapper[4849]: E1209 12:03:10.715049 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerName="extract-content" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.715058 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerName="extract-content" Dec 09 12:03:10 crc kubenswrapper[4849]: E1209 12:03:10.715087 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerName="registry-server" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.715094 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerName="registry-server" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.715316 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f24be54-298b-4bbb-8eaf-379780bd36e5" containerName="registry-server" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.717470 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.743535 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6gn6"] Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.773917 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwmjx\" (UniqueName: \"kubernetes.io/projected/4dba2a1a-5fec-4044-99e8-d38908eaa219-kube-api-access-vwmjx\") pod \"certified-operators-p6gn6\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.774048 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-utilities\") pod \"certified-operators-p6gn6\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.774148 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-catalog-content\") pod \"certified-operators-p6gn6\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.874741 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-catalog-content\") pod \"certified-operators-p6gn6\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.875050 4849 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwmjx\" (UniqueName: \"kubernetes.io/projected/4dba2a1a-5fec-4044-99e8-d38908eaa219-kube-api-access-vwmjx\") pod \"certified-operators-p6gn6\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.875128 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-utilities\") pod \"certified-operators-p6gn6\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.875318 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-catalog-content\") pod \"certified-operators-p6gn6\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.875537 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-utilities\") pod \"certified-operators-p6gn6\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:10 crc kubenswrapper[4849]: I1209 12:03:10.900084 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwmjx\" (UniqueName: \"kubernetes.io/projected/4dba2a1a-5fec-4044-99e8-d38908eaa219-kube-api-access-vwmjx\") pod \"certified-operators-p6gn6\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.047055 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.329321 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vlcft"] Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.331460 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.356356 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlcft"] Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.486575 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr5tj\" (UniqueName: \"kubernetes.io/projected/d573af37-1167-4a2f-87f9-be610a1e347c-kube-api-access-zr5tj\") pod \"community-operators-vlcft\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.486989 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-catalog-content\") pod \"community-operators-vlcft\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.487090 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-utilities\") pod \"community-operators-vlcft\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.588828 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-utilities\") pod \"community-operators-vlcft\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.588911 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr5tj\" (UniqueName: \"kubernetes.io/projected/d573af37-1167-4a2f-87f9-be610a1e347c-kube-api-access-zr5tj\") pod \"community-operators-vlcft\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.589379 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-catalog-content\") pod \"community-operators-vlcft\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.589464 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-utilities\") pod \"community-operators-vlcft\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.589760 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-catalog-content\") pod \"community-operators-vlcft\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.632705 4849 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6gn6"] Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.639792 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr5tj\" (UniqueName: \"kubernetes.io/projected/d573af37-1167-4a2f-87f9-be610a1e347c-kube-api-access-zr5tj\") pod \"community-operators-vlcft\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:11 crc kubenswrapper[4849]: I1209 12:03:11.661216 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:12 crc kubenswrapper[4849]: I1209 12:03:12.063868 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlcft"] Dec 09 12:03:12 crc kubenswrapper[4849]: I1209 12:03:12.624610 4849 generic.go:334] "Generic (PLEG): container finished" podID="d573af37-1167-4a2f-87f9-be610a1e347c" containerID="feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e" exitCode=0 Dec 09 12:03:12 crc kubenswrapper[4849]: I1209 12:03:12.624662 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlcft" event={"ID":"d573af37-1167-4a2f-87f9-be610a1e347c","Type":"ContainerDied","Data":"feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e"} Dec 09 12:03:12 crc kubenswrapper[4849]: I1209 12:03:12.624980 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlcft" event={"ID":"d573af37-1167-4a2f-87f9-be610a1e347c","Type":"ContainerStarted","Data":"d97fc0922c2018b86ee5e4da37c106f913c7a536ff747ab7c7d9f55b48d4d866"} Dec 09 12:03:12 crc kubenswrapper[4849]: I1209 12:03:12.626967 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6gn6" event={"ID":"4dba2a1a-5fec-4044-99e8-d38908eaa219","Type":"ContainerDied","Data":"5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237"} Dec 09 12:03:12 crc kubenswrapper[4849]: I1209 12:03:12.626968 4849 generic.go:334] "Generic (PLEG): container finished" podID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerID="5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237" exitCode=0 Dec 09 12:03:12 crc kubenswrapper[4849]: I1209 12:03:12.627033 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6gn6" event={"ID":"4dba2a1a-5fec-4044-99e8-d38908eaa219","Type":"ContainerStarted","Data":"b602a98c9a5884a8bf6383a14a9d986115619a973420b77c3410edfc01293a7a"} Dec 09 12:03:13 crc kubenswrapper[4849]: I1209 12:03:13.638380 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6gn6" event={"ID":"4dba2a1a-5fec-4044-99e8-d38908eaa219","Type":"ContainerStarted","Data":"1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9"} Dec 09 12:03:13 crc kubenswrapper[4849]: I1209 12:03:13.641137 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlcft" event={"ID":"d573af37-1167-4a2f-87f9-be610a1e347c","Type":"ContainerStarted","Data":"da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0"} Dec 09 12:03:15 crc kubenswrapper[4849]: I1209 12:03:15.658306 4849 generic.go:334] "Generic (PLEG): container finished" podID="d573af37-1167-4a2f-87f9-be610a1e347c" containerID="da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0" exitCode=0 Dec 
09 12:03:15 crc kubenswrapper[4849]: I1209 12:03:15.658395 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlcft" event={"ID":"d573af37-1167-4a2f-87f9-be610a1e347c","Type":"ContainerDied","Data":"da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0"} Dec 09 12:03:15 crc kubenswrapper[4849]: I1209 12:03:15.662655 4849 generic.go:334] "Generic (PLEG): container finished" podID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerID="1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9" exitCode=0 Dec 09 12:03:15 crc kubenswrapper[4849]: I1209 12:03:15.662697 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6gn6" event={"ID":"4dba2a1a-5fec-4044-99e8-d38908eaa219","Type":"ContainerDied","Data":"1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9"} Dec 09 12:03:17 crc kubenswrapper[4849]: I1209 12:03:17.679320 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlcft" event={"ID":"d573af37-1167-4a2f-87f9-be610a1e347c","Type":"ContainerStarted","Data":"a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62"} Dec 09 12:03:17 crc kubenswrapper[4849]: I1209 12:03:17.683780 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6gn6" event={"ID":"4dba2a1a-5fec-4044-99e8-d38908eaa219","Type":"ContainerStarted","Data":"744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253"} Dec 09 12:03:17 crc kubenswrapper[4849]: I1209 12:03:17.703291 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vlcft" podStartSLOduration=2.5281059900000002 podStartE2EDuration="6.703276359s" podCreationTimestamp="2025-12-09 12:03:11 +0000 UTC" firstStartedPulling="2025-12-09 12:03:12.62625881 +0000 UTC m=+2175.166143126" lastFinishedPulling="2025-12-09 12:03:16.801429179 +0000 UTC m=+2179.341313495" observedRunningTime="2025-12-09 12:03:17.699553097 +0000 UTC m=+2180.239437423" watchObservedRunningTime="2025-12-09 12:03:17.703276359 +0000 UTC m=+2180.243160675" Dec 09 12:03:17 crc kubenswrapper[4849]: I1209 12:03:17.736273 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p6gn6" podStartSLOduration=3.628148387 podStartE2EDuration="7.736251329s" podCreationTimestamp="2025-12-09 12:03:10 +0000 UTC" firstStartedPulling="2025-12-09 12:03:12.628490324 +0000 UTC m=+2175.168374640" lastFinishedPulling="2025-12-09 12:03:16.736593276 +0000 UTC m=+2179.276477582" observedRunningTime="2025-12-09 12:03:17.726466298 +0000 UTC m=+2180.266350614" watchObservedRunningTime="2025-12-09 12:03:17.736251329 +0000 UTC m=+2180.276135655" Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.048227 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.048952 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.090332 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.132996 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.133055 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.661909 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.661958 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.719406 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.767089 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:21 crc kubenswrapper[4849]: I1209 12:03:21.769220 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:22 crc kubenswrapper[4849]: I1209 12:03:22.704345 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6gn6"] Dec 09 12:03:23 crc kubenswrapper[4849]: I1209 12:03:23.729363 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p6gn6" podUID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerName="registry-server" containerID="cri-o://744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253" gracePeriod=2 Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.099707 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlcft"] Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.100163 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vlcft" podUID="d573af37-1167-4a2f-87f9-be610a1e347c" containerName="registry-server" containerID="cri-o://a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62" gracePeriod=2 Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.277154 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.440217 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-utilities\") pod \"4dba2a1a-5fec-4044-99e8-d38908eaa219\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.440358 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwmjx\" (UniqueName: \"kubernetes.io/projected/4dba2a1a-5fec-4044-99e8-d38908eaa219-kube-api-access-vwmjx\") pod \"4dba2a1a-5fec-4044-99e8-d38908eaa219\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.440514 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-catalog-content\") pod \"4dba2a1a-5fec-4044-99e8-d38908eaa219\" (UID: \"4dba2a1a-5fec-4044-99e8-d38908eaa219\") " Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.442937 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-utilities" (OuterVolumeSpecName: "utilities") pod "4dba2a1a-5fec-4044-99e8-d38908eaa219" (UID: "4dba2a1a-5fec-4044-99e8-d38908eaa219"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.451044 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dba2a1a-5fec-4044-99e8-d38908eaa219-kube-api-access-vwmjx" (OuterVolumeSpecName: "kube-api-access-vwmjx") pod "4dba2a1a-5fec-4044-99e8-d38908eaa219" (UID: "4dba2a1a-5fec-4044-99e8-d38908eaa219"). InnerVolumeSpecName "kube-api-access-vwmjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.525385 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dba2a1a-5fec-4044-99e8-d38908eaa219" (UID: "4dba2a1a-5fec-4044-99e8-d38908eaa219"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.542824 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.542856 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dba2a1a-5fec-4044-99e8-d38908eaa219-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.542870 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwmjx\" (UniqueName: \"kubernetes.io/projected/4dba2a1a-5fec-4044-99e8-d38908eaa219-kube-api-access-vwmjx\") on node \"crc\" DevicePath \"\"" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.602777 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.740566 4849 generic.go:334] "Generic (PLEG): container finished" podID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerID="744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253" exitCode=0 Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.740632 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6gn6" event={"ID":"4dba2a1a-5fec-4044-99e8-d38908eaa219","Type":"ContainerDied","Data":"744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253"} Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.740665 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6gn6" event={"ID":"4dba2a1a-5fec-4044-99e8-d38908eaa219","Type":"ContainerDied","Data":"b602a98c9a5884a8bf6383a14a9d986115619a973420b77c3410edfc01293a7a"} Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.740687 4849 scope.go:117] "RemoveContainer" containerID="744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.741603 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6gn6" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.743155 4849 generic.go:334] "Generic (PLEG): container finished" podID="d573af37-1167-4a2f-87f9-be610a1e347c" containerID="a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62" exitCode=0 Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.743183 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlcft" event={"ID":"d573af37-1167-4a2f-87f9-be610a1e347c","Type":"ContainerDied","Data":"a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62"} Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.743204 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlcft" event={"ID":"d573af37-1167-4a2f-87f9-be610a1e347c","Type":"ContainerDied","Data":"d97fc0922c2018b86ee5e4da37c106f913c7a536ff747ab7c7d9f55b48d4d866"} Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.743186 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlcft" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.745394 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-catalog-content\") pod \"d573af37-1167-4a2f-87f9-be610a1e347c\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.745460 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-utilities\") pod \"d573af37-1167-4a2f-87f9-be610a1e347c\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.745761 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr5tj\" (UniqueName: \"kubernetes.io/projected/d573af37-1167-4a2f-87f9-be610a1e347c-kube-api-access-zr5tj\") pod \"d573af37-1167-4a2f-87f9-be610a1e347c\" (UID: \"d573af37-1167-4a2f-87f9-be610a1e347c\") " Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.746496 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-utilities" (OuterVolumeSpecName: "utilities") pod "d573af37-1167-4a2f-87f9-be610a1e347c" (UID: "d573af37-1167-4a2f-87f9-be610a1e347c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.750337 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d573af37-1167-4a2f-87f9-be610a1e347c-kube-api-access-zr5tj" (OuterVolumeSpecName: "kube-api-access-zr5tj") pod "d573af37-1167-4a2f-87f9-be610a1e347c" (UID: "d573af37-1167-4a2f-87f9-be610a1e347c"). InnerVolumeSpecName "kube-api-access-zr5tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.766525 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6gn6"] Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.768988 4849 scope.go:117] "RemoveContainer" containerID="1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.779802 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p6gn6"] Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.796848 4849 scope.go:117] "RemoveContainer" containerID="5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.811060 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d573af37-1167-4a2f-87f9-be610a1e347c" (UID: "d573af37-1167-4a2f-87f9-be610a1e347c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.814735 4849 scope.go:117] "RemoveContainer" containerID="744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253" Dec 09 12:03:24 crc kubenswrapper[4849]: E1209 12:03:24.815102 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253\": container with ID starting with 744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253 not found: ID does not exist" containerID="744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.815134 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253"} err="failed to get container status \"744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253\": rpc error: code = NotFound desc = could not find container \"744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253\": container with ID starting with 744bff247bd105608bdaf39637cd47640d8252be91442eca2e83f2b23c126253 not found: ID does not exist" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.815154 4849 scope.go:117] "RemoveContainer" containerID="1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9" Dec 09 12:03:24 crc kubenswrapper[4849]: E1209 12:03:24.815364 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9\": container with ID starting with 1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9 not found: ID does not exist" containerID="1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.815385 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9"} err="failed to get container status \"1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9\": rpc error: code = NotFound desc = could not find container \"1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9\": container with ID starting with 1a95c7060739c906cfbe63685a677fd2103abea797eb49379aa963b8adea6ec9 not found: ID does not exist" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.815398 4849 scope.go:117] "RemoveContainer" containerID="5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237" Dec 09 12:03:24 crc kubenswrapper[4849]: E1209 12:03:24.815667 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237\": container with ID starting with 5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237 not found: ID does not exist" containerID="5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.815687 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237"} err="failed to get container status \"5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237\": rpc error: code = NotFound desc = could not 
find container \"5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237\": container with ID starting with 5ae965f22f5339ddd82a76fea03ed383f8e2460b12d838adfa05f77037916237 not found: ID does not exist" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.815699 4849 scope.go:117] "RemoveContainer" containerID="a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.843050 4849 scope.go:117] "RemoveContainer" containerID="da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.847864 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr5tj\" (UniqueName: \"kubernetes.io/projected/d573af37-1167-4a2f-87f9-be610a1e347c-kube-api-access-zr5tj\") on node \"crc\" DevicePath \"\"" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.848007 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.848071 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d573af37-1167-4a2f-87f9-be610a1e347c-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.898950 4849 scope.go:117] "RemoveContainer" containerID="feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.943285 4849 scope.go:117] "RemoveContainer" containerID="a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62" Dec 09 12:03:24 crc kubenswrapper[4849]: E1209 12:03:24.944040 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62\": container with ID starting with a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62 not found: ID does not exist" containerID="a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.944073 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62"} err="failed to get container status \"a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62\": rpc error: code = NotFound desc = could not find container \"a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62\": container with ID starting with a309982eb8bf2c213048acc0de3b0e1d65e73cf06b1155fed7dea9163eebbe62 not found: ID does not exist" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.944100 4849 scope.go:117] "RemoveContainer" containerID="da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0" Dec 09 12:03:24 crc kubenswrapper[4849]: E1209 12:03:24.944592 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0\": container with ID starting with da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0 not found: ID does not exist" containerID="da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.944638 4849 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0"} err="failed to get container status \"da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0\": rpc error: code = NotFound desc = could not find container \"da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0\": container with ID starting with da1ffc15f0789e9d026d748257222a1816ce8fd0dda3e951e3f803738ab70dd0 not found: ID does not exist" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.944654 4849 scope.go:117] "RemoveContainer" containerID="feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e" Dec 09 12:03:24 crc kubenswrapper[4849]: E1209 12:03:24.945145 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e\": container with ID starting with feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e not found: ID does not exist" containerID="feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e" Dec 09 12:03:24 crc kubenswrapper[4849]: I1209 12:03:24.945197 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e"} err="failed to get container status \"feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e\": rpc error: code = NotFound desc = could not find container \"feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e\": container with ID starting with feda48f0b9793da232935d4e10ea40b101b484a1b754490568f08e91fa21655e not found: ID does not exist" Dec 09 12:03:25 crc kubenswrapper[4849]: I1209 12:03:25.088230 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlcft"] Dec 09 12:03:25 crc kubenswrapper[4849]: I1209 12:03:25.103957 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vlcft"] Dec 09 12:03:26 crc kubenswrapper[4849]: I1209 12:03:26.547693 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dba2a1a-5fec-4044-99e8-d38908eaa219" path="/var/lib/kubelet/pods/4dba2a1a-5fec-4044-99e8-d38908eaa219/volumes" Dec 09 12:03:26 crc kubenswrapper[4849]: I1209 12:03:26.548772 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d573af37-1167-4a2f-87f9-be610a1e347c" path="/var/lib/kubelet/pods/d573af37-1167-4a2f-87f9-be610a1e347c/volumes" Dec 09 12:03:51 crc kubenswrapper[4849]: I1209 12:03:51.132958 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:03:51 crc kubenswrapper[4849]: I1209 12:03:51.133548 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:03:51 crc kubenswrapper[4849]: I1209 12:03:51.133604 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 12:03:51 crc 
kubenswrapper[4849]: I1209 12:03:51.134387 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:03:51 crc kubenswrapper[4849]: I1209 12:03:51.134457 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" gracePeriod=600 Dec 09 12:03:51 crc kubenswrapper[4849]: E1209 12:03:51.259578 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:03:52 crc kubenswrapper[4849]: I1209 12:03:52.018503 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" exitCode=0 Dec 09 12:03:52 crc kubenswrapper[4849]: I1209 12:03:52.018556 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209"} Dec 09 12:03:52 crc kubenswrapper[4849]: I1209 12:03:52.018622 4849 scope.go:117] "RemoveContainer" containerID="7c886b97127ff3cfff7eb01a274c621a14da03725270ed7e7327b9be287540ec" Dec 09 12:03:52 crc kubenswrapper[4849]: I1209 12:03:52.019349 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:03:52 crc kubenswrapper[4849]: E1209 12:03:52.020056 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:04:02 crc kubenswrapper[4849]: I1209 12:04:02.537461 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:04:02 crc kubenswrapper[4849]: E1209 12:04:02.538278 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:04:16 crc kubenswrapper[4849]: I1209 12:04:16.536741 4849 scope.go:117] "RemoveContainer" 
containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:04:16 crc kubenswrapper[4849]: E1209 12:04:16.538536 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:04:28 crc kubenswrapper[4849]: I1209 12:04:28.548170 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:04:28 crc kubenswrapper[4849]: E1209 12:04:28.549045 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:04:41 crc kubenswrapper[4849]: I1209 12:04:41.537024 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:04:41 crc kubenswrapper[4849]: E1209 12:04:41.539355 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:04:53 crc kubenswrapper[4849]: I1209 12:04:53.535945 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:04:53 crc kubenswrapper[4849]: E1209 12:04:53.536752 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:05:08 crc kubenswrapper[4849]: I1209 12:05:08.542305 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:05:08 crc kubenswrapper[4849]: E1209 12:05:08.543305 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:05:20 crc kubenswrapper[4849]: I1209 12:05:20.536461 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:05:20 crc kubenswrapper[4849]: E1209 12:05:20.537218 4849 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.572148 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-znxg4/must-gather-mbhmx"] Dec 09 12:05:31 crc kubenswrapper[4849]: E1209 12:05:31.573079 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d573af37-1167-4a2f-87f9-be610a1e347c" containerName="registry-server" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.573097 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="d573af37-1167-4a2f-87f9-be610a1e347c" containerName="registry-server" Dec 09 12:05:31 crc kubenswrapper[4849]: E1209 12:05:31.573121 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerName="registry-server" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.573127 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerName="registry-server" Dec 09 12:05:31 crc kubenswrapper[4849]: E1209 12:05:31.573151 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d573af37-1167-4a2f-87f9-be610a1e347c" containerName="extract-content" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.573158 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="d573af37-1167-4a2f-87f9-be610a1e347c" containerName="extract-content" Dec 09 12:05:31 crc kubenswrapper[4849]: E1209 12:05:31.573171 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d573af37-1167-4a2f-87f9-be610a1e347c" containerName="extract-utilities" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.573177 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="d573af37-1167-4a2f-87f9-be610a1e347c" containerName="extract-utilities" Dec 09 12:05:31 crc kubenswrapper[4849]: E1209 12:05:31.573192 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerName="extract-utilities" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.573198 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerName="extract-utilities" Dec 09 12:05:31 crc kubenswrapper[4849]: E1209 12:05:31.573204 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerName="extract-content" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.573211 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerName="extract-content" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.573364 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dba2a1a-5fec-4044-99e8-d38908eaa219" containerName="registry-server" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.573382 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="d573af37-1167-4a2f-87f9-be610a1e347c" containerName="registry-server" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.574564 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.578028 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-znxg4"/"openshift-service-ca.crt" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.596609 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-znxg4"/"kube-root-ca.crt" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.600300 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-znxg4/must-gather-mbhmx"] Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.675917 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8988f826-1349-4c00-9ac1-9540bb868c89-must-gather-output\") pod \"must-gather-mbhmx\" (UID: \"8988f826-1349-4c00-9ac1-9540bb868c89\") " pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.676259 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbpp8\" (UniqueName: \"kubernetes.io/projected/8988f826-1349-4c00-9ac1-9540bb868c89-kube-api-access-kbpp8\") pod \"must-gather-mbhmx\" (UID: \"8988f826-1349-4c00-9ac1-9540bb868c89\") " pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.777581 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbpp8\" (UniqueName: \"kubernetes.io/projected/8988f826-1349-4c00-9ac1-9540bb868c89-kube-api-access-kbpp8\") pod \"must-gather-mbhmx\" (UID: \"8988f826-1349-4c00-9ac1-9540bb868c89\") " pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.777865 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8988f826-1349-4c00-9ac1-9540bb868c89-must-gather-output\") pod \"must-gather-mbhmx\" (UID: \"8988f826-1349-4c00-9ac1-9540bb868c89\") " pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.778331 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8988f826-1349-4c00-9ac1-9540bb868c89-must-gather-output\") pod \"must-gather-mbhmx\" (UID: \"8988f826-1349-4c00-9ac1-9540bb868c89\") " pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.802115 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbpp8\" (UniqueName: \"kubernetes.io/projected/8988f826-1349-4c00-9ac1-9540bb868c89-kube-api-access-kbpp8\") pod \"must-gather-mbhmx\" (UID: \"8988f826-1349-4c00-9ac1-9540bb868c89\") " pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:05:31 crc kubenswrapper[4849]: I1209 12:05:31.895999 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:05:32 crc kubenswrapper[4849]: W1209 12:05:32.208783 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8988f826_1349_4c00_9ac1_9540bb868c89.slice/crio-e05d41ee833da2060270611d5273cb4545688c5bf8500fab5303c8652585e987 WatchSource:0}: Error finding container e05d41ee833da2060270611d5273cb4545688c5bf8500fab5303c8652585e987: Status 404 returned error can't find the container with id e05d41ee833da2060270611d5273cb4545688c5bf8500fab5303c8652585e987 Dec 09 12:05:32 crc kubenswrapper[4849]: I1209 12:05:32.210658 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-znxg4/must-gather-mbhmx"] Dec 09 12:05:32 crc kubenswrapper[4849]: I1209 12:05:32.875511 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-znxg4/must-gather-mbhmx" event={"ID":"8988f826-1349-4c00-9ac1-9540bb868c89","Type":"ContainerStarted","Data":"e05d41ee833da2060270611d5273cb4545688c5bf8500fab5303c8652585e987"} Dec 09 12:05:35 crc kubenswrapper[4849]: I1209 12:05:35.536841 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:05:35 crc kubenswrapper[4849]: E1209 12:05:35.537358 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:05:40 crc kubenswrapper[4849]: I1209 12:05:40.960274 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-znxg4/must-gather-mbhmx" event={"ID":"8988f826-1349-4c00-9ac1-9540bb868c89","Type":"ContainerStarted","Data":"cb6bfa29a0bc65fc7785a86886c996a89e2a92e20a3b9db2c2a57ddf12942b90"} Dec 09 12:05:40 crc kubenswrapper[4849]: I1209 12:05:40.960902 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-znxg4/must-gather-mbhmx" event={"ID":"8988f826-1349-4c00-9ac1-9540bb868c89","Type":"ContainerStarted","Data":"9d296a6fb04fa63ff9a3750ab79e50e0053c2605505b1ab3be868a7ea4dcb87d"} Dec 09 12:05:40 crc kubenswrapper[4849]: I1209 12:05:40.985025 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-znxg4/must-gather-mbhmx" podStartSLOduration=2.487338925 podStartE2EDuration="9.985005052s" podCreationTimestamp="2025-12-09 12:05:31 +0000 UTC" firstStartedPulling="2025-12-09 12:05:32.211066018 +0000 UTC m=+2314.750950334" lastFinishedPulling="2025-12-09 12:05:39.708732145 +0000 UTC m=+2322.248616461" observedRunningTime="2025-12-09 12:05:40.979560929 +0000 UTC m=+2323.519445255" watchObservedRunningTime="2025-12-09 12:05:40.985005052 +0000 UTC m=+2323.524889368" Dec 09 12:05:43 crc kubenswrapper[4849]: I1209 12:05:43.699946 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-znxg4/crc-debug-4crq6"] Dec 09 12:05:43 crc kubenswrapper[4849]: I1209 12:05:43.701701 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:05:43 crc kubenswrapper[4849]: I1209 12:05:43.706114 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-znxg4"/"default-dockercfg-l4rvj" Dec 09 12:05:43 crc kubenswrapper[4849]: I1209 12:05:43.877314 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-host\") pod \"crc-debug-4crq6\" (UID: \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\") " pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:05:43 crc kubenswrapper[4849]: I1209 12:05:43.877562 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knc2s\" (UniqueName: \"kubernetes.io/projected/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-kube-api-access-knc2s\") pod \"crc-debug-4crq6\" (UID: \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\") " pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:05:43 crc kubenswrapper[4849]: I1209 12:05:43.978844 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knc2s\" (UniqueName: \"kubernetes.io/projected/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-kube-api-access-knc2s\") pod \"crc-debug-4crq6\" (UID: \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\") " pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:05:43 crc kubenswrapper[4849]: I1209 12:05:43.979302 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-host\") pod \"crc-debug-4crq6\" (UID: \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\") " pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:05:43 crc kubenswrapper[4849]: I1209 12:05:43.979446 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-host\") pod \"crc-debug-4crq6\" (UID: \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\") " pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:05:44 crc kubenswrapper[4849]: I1209 12:05:44.005350 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knc2s\" (UniqueName: \"kubernetes.io/projected/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-kube-api-access-knc2s\") pod \"crc-debug-4crq6\" (UID: \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\") " pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:05:44 crc kubenswrapper[4849]: I1209 12:05:44.019045 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:05:44 crc kubenswrapper[4849]: W1209 12:05:44.051949 4849 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e9b5e2f_8737_42b4_8a44_ba2178e5ab2d.slice/crio-8532e2582ef9d417ffc313697e3212bc75e3cb5d79140ffea371fda9076ab130 WatchSource:0}: Error finding container 8532e2582ef9d417ffc313697e3212bc75e3cb5d79140ffea371fda9076ab130: Status 404 returned error can't find the container with id 8532e2582ef9d417ffc313697e3212bc75e3cb5d79140ffea371fda9076ab130 Dec 09 12:05:44 crc kubenswrapper[4849]: I1209 12:05:44.990599 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-znxg4/crc-debug-4crq6" event={"ID":"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d","Type":"ContainerStarted","Data":"8532e2582ef9d417ffc313697e3212bc75e3cb5d79140ffea371fda9076ab130"} Dec 09 12:05:50 crc kubenswrapper[4849]: I1209 12:05:50.537125 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:05:50 crc kubenswrapper[4849]: E1209 12:05:50.538283 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:05:56 crc kubenswrapper[4849]: I1209 12:05:56.114468 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-znxg4/crc-debug-4crq6" event={"ID":"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d","Type":"ContainerStarted","Data":"34476c684fc9e741f2ac9cd132d3528851990a2f883ac8abfbf95f6d5c5106fa"} Dec 09 12:05:56 crc kubenswrapper[4849]: I1209 12:05:56.135669 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-znxg4/crc-debug-4crq6" podStartSLOduration=1.920502017 podStartE2EDuration="13.135619806s" podCreationTimestamp="2025-12-09 12:05:43 +0000 UTC" firstStartedPulling="2025-12-09 12:05:44.056464576 +0000 UTC m=+2326.596348892" lastFinishedPulling="2025-12-09 12:05:55.271582365 +0000 UTC m=+2337.811466681" observedRunningTime="2025-12-09 12:05:56.126039642 +0000 UTC m=+2338.665923958" watchObservedRunningTime="2025-12-09 12:05:56.135619806 +0000 UTC m=+2338.675504122" Dec 09 12:06:03 crc kubenswrapper[4849]: I1209 12:06:03.536685 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:06:03 crc kubenswrapper[4849]: E1209 12:06:03.537366 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:06:14 crc kubenswrapper[4849]: I1209 12:06:14.536374 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:06:14 crc kubenswrapper[4849]: E1209 12:06:14.537735 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:06:17 crc kubenswrapper[4849]: I1209 12:06:17.300770 4849 generic.go:334] "Generic (PLEG): container finished" podID="5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d" containerID="34476c684fc9e741f2ac9cd132d3528851990a2f883ac8abfbf95f6d5c5106fa" exitCode=0 Dec 09 12:06:17 crc kubenswrapper[4849]: I1209 12:06:17.300829 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-znxg4/crc-debug-4crq6" event={"ID":"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d","Type":"ContainerDied","Data":"34476c684fc9e741f2ac9cd132d3528851990a2f883ac8abfbf95f6d5c5106fa"} Dec 09 12:06:18 crc kubenswrapper[4849]: I1209 12:06:18.404114 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:06:18 crc kubenswrapper[4849]: I1209 12:06:18.438903 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-znxg4/crc-debug-4crq6"] Dec 09 12:06:18 crc kubenswrapper[4849]: I1209 12:06:18.466270 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-znxg4/crc-debug-4crq6"] Dec 09 12:06:18 crc kubenswrapper[4849]: I1209 12:06:18.598935 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knc2s\" (UniqueName: \"kubernetes.io/projected/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-kube-api-access-knc2s\") pod \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\" (UID: \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\") " Dec 09 12:06:18 crc kubenswrapper[4849]: I1209 12:06:18.599369 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-host\") pod \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\" (UID: \"5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d\") " Dec 09 12:06:18 crc kubenswrapper[4849]: I1209 12:06:18.599544 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-host" (OuterVolumeSpecName: "host") pod "5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d" (UID: "5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:06:18 crc kubenswrapper[4849]: I1209 12:06:18.600103 4849 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-host\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:18 crc kubenswrapper[4849]: I1209 12:06:18.607583 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-kube-api-access-knc2s" (OuterVolumeSpecName: "kube-api-access-knc2s") pod "5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d" (UID: "5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d"). InnerVolumeSpecName "kube-api-access-knc2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:18 crc kubenswrapper[4849]: I1209 12:06:18.701889 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knc2s\" (UniqueName: \"kubernetes.io/projected/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d-kube-api-access-knc2s\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.318295 4849 scope.go:117] "RemoveContainer" containerID="34476c684fc9e741f2ac9cd132d3528851990a2f883ac8abfbf95f6d5c5106fa" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.318358 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-znxg4/crc-debug-4crq6" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.673483 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-znxg4/crc-debug-r7gpp"] Dec 09 12:06:19 crc kubenswrapper[4849]: E1209 12:06:19.673976 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d" containerName="container-00" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.673996 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d" containerName="container-00" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.674193 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d" containerName="container-00" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.674822 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.677126 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-znxg4"/"default-dockercfg-l4rvj" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.719625 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnq9q\" (UniqueName: \"kubernetes.io/projected/b564e738-acfe-425a-91aa-233cc7ae3580-kube-api-access-lnq9q\") pod \"crc-debug-r7gpp\" (UID: \"b564e738-acfe-425a-91aa-233cc7ae3580\") " pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.719696 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b564e738-acfe-425a-91aa-233cc7ae3580-host\") pod \"crc-debug-r7gpp\" (UID: \"b564e738-acfe-425a-91aa-233cc7ae3580\") " pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.821250 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b564e738-acfe-425a-91aa-233cc7ae3580-host\") pod \"crc-debug-r7gpp\" (UID: \"b564e738-acfe-425a-91aa-233cc7ae3580\") " pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.821394 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnq9q\" (UniqueName: \"kubernetes.io/projected/b564e738-acfe-425a-91aa-233cc7ae3580-kube-api-access-lnq9q\") pod \"crc-debug-r7gpp\" (UID: \"b564e738-acfe-425a-91aa-233cc7ae3580\") " pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.821489 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/b564e738-acfe-425a-91aa-233cc7ae3580-host\") pod \"crc-debug-r7gpp\" (UID: \"b564e738-acfe-425a-91aa-233cc7ae3580\") " pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.848161 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnq9q\" (UniqueName: \"kubernetes.io/projected/b564e738-acfe-425a-91aa-233cc7ae3580-kube-api-access-lnq9q\") pod \"crc-debug-r7gpp\" (UID: \"b564e738-acfe-425a-91aa-233cc7ae3580\") " pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:19 crc kubenswrapper[4849]: I1209 12:06:19.991456 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:20 crc kubenswrapper[4849]: I1209 12:06:20.328511 4849 generic.go:334] "Generic (PLEG): container finished" podID="b564e738-acfe-425a-91aa-233cc7ae3580" containerID="86f058a4eb2c648b3e8158a34d713858be2df6cb034dfd5ba19c4d15018851bd" exitCode=1 Dec 09 12:06:20 crc kubenswrapper[4849]: I1209 12:06:20.328620 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-znxg4/crc-debug-r7gpp" event={"ID":"b564e738-acfe-425a-91aa-233cc7ae3580","Type":"ContainerDied","Data":"86f058a4eb2c648b3e8158a34d713858be2df6cb034dfd5ba19c4d15018851bd"} Dec 09 12:06:20 crc kubenswrapper[4849]: I1209 12:06:20.328917 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-znxg4/crc-debug-r7gpp" event={"ID":"b564e738-acfe-425a-91aa-233cc7ae3580","Type":"ContainerStarted","Data":"0957ae6342e7a2f65fc55f1ea59c6b15e3b8aa21208ed4a1ecbcea6d6eebe4b7"} Dec 09 12:06:20 crc kubenswrapper[4849]: I1209 12:06:20.412585 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-znxg4/crc-debug-r7gpp"] Dec 09 12:06:20 crc kubenswrapper[4849]: I1209 12:06:20.419136 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-znxg4/crc-debug-r7gpp"] Dec 09 12:06:20 crc kubenswrapper[4849]: I1209 12:06:20.582740 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d" path="/var/lib/kubelet/pods/5e9b5e2f-8737-42b4-8a44-ba2178e5ab2d/volumes" Dec 09 12:06:21 crc kubenswrapper[4849]: I1209 12:06:21.484649 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:21 crc kubenswrapper[4849]: I1209 12:06:21.593459 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b564e738-acfe-425a-91aa-233cc7ae3580-host\") pod \"b564e738-acfe-425a-91aa-233cc7ae3580\" (UID: \"b564e738-acfe-425a-91aa-233cc7ae3580\") " Dec 09 12:06:21 crc kubenswrapper[4849]: I1209 12:06:21.593609 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b564e738-acfe-425a-91aa-233cc7ae3580-host" (OuterVolumeSpecName: "host") pod "b564e738-acfe-425a-91aa-233cc7ae3580" (UID: "b564e738-acfe-425a-91aa-233cc7ae3580"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:06:21 crc kubenswrapper[4849]: I1209 12:06:21.593918 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnq9q\" (UniqueName: \"kubernetes.io/projected/b564e738-acfe-425a-91aa-233cc7ae3580-kube-api-access-lnq9q\") pod \"b564e738-acfe-425a-91aa-233cc7ae3580\" (UID: \"b564e738-acfe-425a-91aa-233cc7ae3580\") " Dec 09 12:06:21 crc kubenswrapper[4849]: I1209 12:06:21.594560 4849 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b564e738-acfe-425a-91aa-233cc7ae3580-host\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:21 crc kubenswrapper[4849]: I1209 12:06:21.599272 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b564e738-acfe-425a-91aa-233cc7ae3580-kube-api-access-lnq9q" (OuterVolumeSpecName: "kube-api-access-lnq9q") pod "b564e738-acfe-425a-91aa-233cc7ae3580" (UID: "b564e738-acfe-425a-91aa-233cc7ae3580"). InnerVolumeSpecName "kube-api-access-lnq9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:21 crc kubenswrapper[4849]: I1209 12:06:21.696104 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnq9q\" (UniqueName: \"kubernetes.io/projected/b564e738-acfe-425a-91aa-233cc7ae3580-kube-api-access-lnq9q\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:22 crc kubenswrapper[4849]: I1209 12:06:22.385011 4849 scope.go:117] "RemoveContainer" containerID="86f058a4eb2c648b3e8158a34d713858be2df6cb034dfd5ba19c4d15018851bd" Dec 09 12:06:22 crc kubenswrapper[4849]: I1209 12:06:22.385038 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-znxg4/crc-debug-r7gpp" Dec 09 12:06:22 crc kubenswrapper[4849]: I1209 12:06:22.546299 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b564e738-acfe-425a-91aa-233cc7ae3580" path="/var/lib/kubelet/pods/b564e738-acfe-425a-91aa-233cc7ae3580/volumes" Dec 09 12:06:26 crc kubenswrapper[4849]: I1209 12:06:26.536599 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:06:26 crc kubenswrapper[4849]: E1209 12:06:26.537379 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:06:38 crc kubenswrapper[4849]: I1209 12:06:38.548563 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:06:38 crc kubenswrapper[4849]: E1209 12:06:38.549316 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:06:52 crc kubenswrapper[4849]: I1209 12:06:52.536475 4849 scope.go:117] "RemoveContainer" 
containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:06:52 crc kubenswrapper[4849]: E1209 12:06:52.537279 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:07:03 crc kubenswrapper[4849]: I1209 12:07:03.537402 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:07:03 crc kubenswrapper[4849]: E1209 12:07:03.538273 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:07:05 crc kubenswrapper[4849]: I1209 12:07:05.543020 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b9f4d9d4-l4khd_90d4aeb9-b28b-4315-9bbf-aab0e5247d9a/barbican-api/0.log" Dec 09 12:07:05 crc kubenswrapper[4849]: I1209 12:07:05.600609 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b9f4d9d4-l4khd_90d4aeb9-b28b-4315-9bbf-aab0e5247d9a/barbican-api-log/0.log" Dec 09 12:07:05 crc kubenswrapper[4849]: I1209 12:07:05.753199 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-846cbdf45b-jlcvr_1b8073bc-e628-45c2-8d54-a455f73261af/barbican-keystone-listener/0.log" Dec 09 12:07:05 crc kubenswrapper[4849]: I1209 12:07:05.860812 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-846cbdf45b-jlcvr_1b8073bc-e628-45c2-8d54-a455f73261af/barbican-keystone-listener-log/0.log" Dec 09 12:07:05 crc kubenswrapper[4849]: I1209 12:07:05.933319 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-69b864d45f-rvq8j_54f7b258-dafd-4c17-85d7-457129b212de/barbican-worker/0.log" Dec 09 12:07:05 crc kubenswrapper[4849]: I1209 12:07:05.987239 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-69b864d45f-rvq8j_54f7b258-dafd-4c17-85d7-457129b212de/barbican-worker-log/0.log" Dec 09 12:07:06 crc kubenswrapper[4849]: I1209 12:07:06.146063 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8zp5z_04376a83-eea2-4010-8403-0852cbf6b7de/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 12:07:06 crc kubenswrapper[4849]: I1209 12:07:06.250475 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4db3e0c4-4bd4-4096-9186-49c7b7d371a7/ceilometer-central-agent/0.log" Dec 09 12:07:06 crc kubenswrapper[4849]: I1209 12:07:06.411310 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4db3e0c4-4bd4-4096-9186-49c7b7d371a7/proxy-httpd/0.log" Dec 09 12:07:06 crc kubenswrapper[4849]: I1209 12:07:06.430604 4849 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_4db3e0c4-4bd4-4096-9186-49c7b7d371a7/ceilometer-notification-agent/0.log" Dec 09 12:07:06 crc kubenswrapper[4849]: I1209 12:07:06.525028 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4db3e0c4-4bd4-4096-9186-49c7b7d371a7/sg-core/0.log" Dec 09 12:07:06 crc kubenswrapper[4849]: I1209 12:07:06.632937 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-j28nk_dc9bff1c-d856-4cab-9c39-19d8106e6a35/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 12:07:06 crc kubenswrapper[4849]: I1209 12:07:06.726448 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e/cinder-api/0.log" Dec 09 12:07:06 crc kubenswrapper[4849]: I1209 12:07:06.805485 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4f98f2cd-8b2b-48c2-8588-ac49f5c5f09e/cinder-api-log/0.log" Dec 09 12:07:06 crc kubenswrapper[4849]: I1209 12:07:06.936307 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_292bc586-9fad-4698-b31f-e65e317ef940/cinder-scheduler/0.log" Dec 09 12:07:07 crc kubenswrapper[4849]: I1209 12:07:07.018619 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_292bc586-9fad-4698-b31f-e65e317ef940/probe/0.log" Dec 09 12:07:07 crc kubenswrapper[4849]: I1209 12:07:07.172980 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-f6lr5_fa008105-59e6-48d8-9b1c-c8d65ad51d31/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 12:07:07 crc kubenswrapper[4849]: I1209 12:07:07.280165 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-lzn87_321f3d04-9b3c-456d-b13c-6db5d42dedb7/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 12:07:07 crc kubenswrapper[4849]: I1209 12:07:07.401150 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-fb68d687f-pq4dx_1136dca8-4c2e-45f6-81bf-0b990a6af3b7/init/0.log" Dec 09 12:07:07 crc kubenswrapper[4849]: I1209 12:07:07.668811 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-fb68d687f-pq4dx_1136dca8-4c2e-45f6-81bf-0b990a6af3b7/dnsmasq-dns/0.log" Dec 09 12:07:07 crc kubenswrapper[4849]: I1209 12:07:07.675514 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-fb68d687f-pq4dx_1136dca8-4c2e-45f6-81bf-0b990a6af3b7/init/0.log" Dec 09 12:07:07 crc kubenswrapper[4849]: I1209 12:07:07.744793 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6shhl_bf13f211-fc25-44b6-bdee-e6b92c4102c4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 12:07:07 crc kubenswrapper[4849]: I1209 12:07:07.908067 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-76d4cfc555-fqqzj_f07ea8eb-8b14-491f-bf4a-f7409628ae82/keystone-api/0.log" Dec 09 12:07:07 crc kubenswrapper[4849]: I1209 12:07:07.982209 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29421361-j9tc6_41f58e35-612c-46b5-a376-d59343443e53/keystone-cron/0.log" Dec 09 12:07:08 crc kubenswrapper[4849]: I1209 12:07:08.108670 4849 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_9ed08f18-45d0-4623-848f-ebfacdb7b421/kube-state-metrics/0.log" Dec 09 12:07:08 crc kubenswrapper[4849]: I1209 12:07:08.489483 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c689fb97-j4mnm_8ae2f3e7-3db7-4477-8c03-8c8817fe17d3/neutron-httpd/0.log" Dec 09 12:07:08 crc kubenswrapper[4849]: I1209 12:07:08.548306 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c689fb97-j4mnm_8ae2f3e7-3db7-4477-8c03-8c8817fe17d3/neutron-api/0.log" Dec 09 12:07:09 crc kubenswrapper[4849]: I1209 12:07:09.007319 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a98f6160-0455-4d4f-adfa-d01a4a2f1edc/nova-api-log/0.log" Dec 09 12:07:09 crc kubenswrapper[4849]: I1209 12:07:09.058181 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a98f6160-0455-4d4f-adfa-d01a4a2f1edc/nova-api-api/0.log" Dec 09 12:07:09 crc kubenswrapper[4849]: I1209 12:07:09.376968 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_460a5c7d-f40b-44eb-a861-b5b12c72d128/nova-cell0-conductor-conductor/0.log" Dec 09 12:07:09 crc kubenswrapper[4849]: I1209 12:07:09.516598 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1669bf2d-c24f-46a6-9cdf-1f28689a44b2/nova-cell1-conductor-conductor/0.log" Dec 09 12:07:09 crc kubenswrapper[4849]: I1209 12:07:09.809218 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_544ee850-6363-4bb9-89d8-c9160c2d850a/nova-cell1-novncproxy-novncproxy/0.log" Dec 09 12:07:10 crc kubenswrapper[4849]: I1209 12:07:10.105672 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4b8f2bbe-7bf5-429e-835d-15cd1e039456/nova-metadata-log/0.log" Dec 09 12:07:10 crc kubenswrapper[4849]: I1209 12:07:10.483944 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_574c9a8a-6aaf-4344-b566-039bf65b788d/mysql-bootstrap/0.log" Dec 09 12:07:10 crc kubenswrapper[4849]: I1209 12:07:10.512235 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c72aad0e-4358-4166-b17b-2114ea10bae1/nova-scheduler-scheduler/0.log" Dec 09 12:07:10 crc kubenswrapper[4849]: I1209 12:07:10.588217 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4b8f2bbe-7bf5-429e-835d-15cd1e039456/nova-metadata-metadata/0.log" Dec 09 12:07:10 crc kubenswrapper[4849]: I1209 12:07:10.788589 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_574c9a8a-6aaf-4344-b566-039bf65b788d/mysql-bootstrap/0.log" Dec 09 12:07:10 crc kubenswrapper[4849]: I1209 12:07:10.924546 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f78d8a52-1a90-4413-acb9-3925dfa4f1f0/mysql-bootstrap/0.log" Dec 09 12:07:10 crc kubenswrapper[4849]: I1209 12:07:10.996811 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_574c9a8a-6aaf-4344-b566-039bf65b788d/galera/0.log" Dec 09 12:07:11 crc kubenswrapper[4849]: I1209 12:07:11.222530 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f78d8a52-1a90-4413-acb9-3925dfa4f1f0/mysql-bootstrap/0.log" Dec 09 12:07:11 crc kubenswrapper[4849]: I1209 12:07:11.232027 4849 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_f78d8a52-1a90-4413-acb9-3925dfa4f1f0/galera/0.log" Dec 09 12:07:11 crc kubenswrapper[4849]: I1209 12:07:11.294571 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_39476746-c540-4e2e-b31c-de35ea3d9ec1/openstackclient/0.log" Dec 09 12:07:11 crc kubenswrapper[4849]: I1209 12:07:11.483926 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-czrmh_69a39d69-d705-4246-bc77-cbdd3fadfefa/ovn-controller/0.log" Dec 09 12:07:11 crc kubenswrapper[4849]: I1209 12:07:11.532786 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-s5gch_edb06c44-5bf3-44c9-8db2-9e9f1b6bab2c/openstack-network-exporter/0.log" Dec 09 12:07:12 crc kubenswrapper[4849]: I1209 12:07:12.031477 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-chw84_47f40834-5de4-472b-a069-579d98cff69e/ovsdb-server-init/0.log" Dec 09 12:07:12 crc kubenswrapper[4849]: I1209 12:07:12.269218 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-chw84_47f40834-5de4-472b-a069-579d98cff69e/ovs-vswitchd/0.log" Dec 09 12:07:12 crc kubenswrapper[4849]: I1209 12:07:12.284830 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-chw84_47f40834-5de4-472b-a069-579d98cff69e/ovsdb-server-init/0.log" Dec 09 12:07:12 crc kubenswrapper[4849]: I1209 12:07:12.351135 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-chw84_47f40834-5de4-472b-a069-579d98cff69e/ovsdb-server/0.log" Dec 09 12:07:12 crc kubenswrapper[4849]: I1209 12:07:12.510715 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c816f3aa-88ea-408d-a6e1-dd1e962688c6/openstack-network-exporter/0.log" Dec 09 12:07:12 crc kubenswrapper[4849]: I1209 12:07:12.546898 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c816f3aa-88ea-408d-a6e1-dd1e962688c6/ovn-northd/0.log" Dec 09 12:07:12 crc kubenswrapper[4849]: I1209 12:07:12.732476 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bbb8ec61-588d-43ff-8597-eddb7a747106/openstack-network-exporter/0.log" Dec 09 12:07:12 crc kubenswrapper[4849]: I1209 12:07:12.779907 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bbb8ec61-588d-43ff-8597-eddb7a747106/ovsdbserver-nb/0.log" Dec 09 12:07:12 crc kubenswrapper[4849]: I1209 12:07:12.898725 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_40314306-27de-4c9d-ab86-7499d56d57c6/openstack-network-exporter/0.log" Dec 09 12:07:13 crc kubenswrapper[4849]: I1209 12:07:13.017688 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_40314306-27de-4c9d-ab86-7499d56d57c6/ovsdbserver-sb/0.log" Dec 09 12:07:13 crc kubenswrapper[4849]: I1209 12:07:13.156575 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d49689b4-4nl2p_d11a8e2b-c868-44e0-a5a5-56267d22e4b4/placement-api/0.log" Dec 09 12:07:13 crc kubenswrapper[4849]: I1209 12:07:13.301837 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d49689b4-4nl2p_d11a8e2b-c868-44e0-a5a5-56267d22e4b4/placement-log/0.log" Dec 09 12:07:13 crc kubenswrapper[4849]: I1209 12:07:13.350004 4849 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec518407-e004-4dde-8a57-91307009b4a3/setup-container/0.log" Dec 09 12:07:13 crc kubenswrapper[4849]: I1209 12:07:13.606945 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec518407-e004-4dde-8a57-91307009b4a3/rabbitmq/0.log" Dec 09 12:07:13 crc kubenswrapper[4849]: I1209 12:07:13.662490 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b6effe5a-3a21-4f55-905d-7f275cbe1f8f/setup-container/0.log" Dec 09 12:07:13 crc kubenswrapper[4849]: I1209 12:07:13.675362 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec518407-e004-4dde-8a57-91307009b4a3/setup-container/0.log" Dec 09 12:07:13 crc kubenswrapper[4849]: I1209 12:07:13.934156 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b6effe5a-3a21-4f55-905d-7f275cbe1f8f/setup-container/0.log" Dec 09 12:07:13 crc kubenswrapper[4849]: I1209 12:07:13.937880 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b6effe5a-3a21-4f55-905d-7f275cbe1f8f/rabbitmq/0.log" Dec 09 12:07:14 crc kubenswrapper[4849]: I1209 12:07:14.024298 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-g597m_97c83fff-a07f-484a-8f87-d15c41ebb56a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 12:07:14 crc kubenswrapper[4849]: I1209 12:07:14.208858 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fqll6_e366a1ff-a008-4f60-ba19-c4628338ab7d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 12:07:14 crc kubenswrapper[4849]: I1209 12:07:14.268488 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-f2l95_4cd4e831-b4c0-4217-a69d-6927c605d0d5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 12:07:14 crc kubenswrapper[4849]: I1209 12:07:14.501228 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d9617b32-ad2b-4bd3-a0d1-5ca6af5569ce/memcached/0.log" Dec 09 12:07:14 crc kubenswrapper[4849]: I1209 12:07:14.575467 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sdzhv_6393e5b3-a774-4273-8306-333ba2fb51ac/ssh-known-hosts-edpm-deployment/0.log" Dec 09 12:07:14 crc kubenswrapper[4849]: I1209 12:07:14.638025 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4z2rz_fa05da2f-5d37-4c32-a2c5-e30019999c60/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 12:07:15 crc kubenswrapper[4849]: I1209 12:07:15.536914 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:07:15 crc kubenswrapper[4849]: E1209 12:07:15.537156 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:07:18 crc kubenswrapper[4849]: I1209 12:07:18.784961 4849 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-v9rb2"] Dec 09 12:07:18 crc kubenswrapper[4849]: E1209 12:07:18.787053 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b564e738-acfe-425a-91aa-233cc7ae3580" containerName="container-00" Dec 09 12:07:18 crc kubenswrapper[4849]: I1209 12:07:18.787172 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="b564e738-acfe-425a-91aa-233cc7ae3580" containerName="container-00" Dec 09 12:07:18 crc kubenswrapper[4849]: I1209 12:07:18.787545 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="b564e738-acfe-425a-91aa-233cc7ae3580" containerName="container-00" Dec 09 12:07:18 crc kubenswrapper[4849]: I1209 12:07:18.789296 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:18 crc kubenswrapper[4849]: I1209 12:07:18.868215 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9rb2"] Dec 09 12:07:18 crc kubenswrapper[4849]: I1209 12:07:18.935558 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-catalog-content\") pod \"redhat-marketplace-v9rb2\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:18 crc kubenswrapper[4849]: I1209 12:07:18.935607 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-utilities\") pod \"redhat-marketplace-v9rb2\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:18 crc kubenswrapper[4849]: I1209 12:07:18.935717 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt8mv\" (UniqueName: \"kubernetes.io/projected/896c6168-6db7-44c5-87ca-53b2914e7fa0-kube-api-access-bt8mv\") pod \"redhat-marketplace-v9rb2\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.037054 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-catalog-content\") pod \"redhat-marketplace-v9rb2\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.037117 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-utilities\") pod \"redhat-marketplace-v9rb2\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.037741 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-catalog-content\") pod \"redhat-marketplace-v9rb2\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.037793 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-utilities\") pod \"redhat-marketplace-v9rb2\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.038099 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt8mv\" (UniqueName: \"kubernetes.io/projected/896c6168-6db7-44c5-87ca-53b2914e7fa0-kube-api-access-bt8mv\") pod \"redhat-marketplace-v9rb2\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.073836 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt8mv\" (UniqueName: \"kubernetes.io/projected/896c6168-6db7-44c5-87ca-53b2914e7fa0-kube-api-access-bt8mv\") pod \"redhat-marketplace-v9rb2\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.111537 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.654940 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9rb2"] Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.851056 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9rb2" event={"ID":"896c6168-6db7-44c5-87ca-53b2914e7fa0","Type":"ContainerStarted","Data":"fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb"} Dec 09 12:07:19 crc kubenswrapper[4849]: I1209 12:07:19.851353 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9rb2" event={"ID":"896c6168-6db7-44c5-87ca-53b2914e7fa0","Type":"ContainerStarted","Data":"7f7a61c69170a3b0895d85624ed7f0ad8fa633575a9a544b32b6502c3c497312"} Dec 09 12:07:20 crc kubenswrapper[4849]: I1209 12:07:20.861995 4849 generic.go:334] "Generic (PLEG): container finished" podID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerID="fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb" exitCode=0 Dec 09 12:07:20 crc kubenswrapper[4849]: I1209 12:07:20.862123 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9rb2" event={"ID":"896c6168-6db7-44c5-87ca-53b2914e7fa0","Type":"ContainerDied","Data":"fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb"} Dec 09 12:07:21 crc kubenswrapper[4849]: I1209 12:07:21.884262 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9rb2" event={"ID":"896c6168-6db7-44c5-87ca-53b2914e7fa0","Type":"ContainerStarted","Data":"83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b"} Dec 09 12:07:22 crc kubenswrapper[4849]: I1209 12:07:22.893670 4849 generic.go:334] "Generic (PLEG): container finished" podID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerID="83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b" exitCode=0 Dec 09 12:07:22 crc kubenswrapper[4849]: I1209 12:07:22.893722 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9rb2" event={"ID":"896c6168-6db7-44c5-87ca-53b2914e7fa0","Type":"ContainerDied","Data":"83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b"} Dec 09 12:07:23 crc 
kubenswrapper[4849]: I1209 12:07:23.904194 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9rb2" event={"ID":"896c6168-6db7-44c5-87ca-53b2914e7fa0","Type":"ContainerStarted","Data":"6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588"} Dec 09 12:07:28 crc kubenswrapper[4849]: I1209 12:07:28.550191 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:07:28 crc kubenswrapper[4849]: E1209 12:07:28.551385 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:07:29 crc kubenswrapper[4849]: I1209 12:07:29.112131 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:29 crc kubenswrapper[4849]: I1209 12:07:29.112178 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:29 crc kubenswrapper[4849]: I1209 12:07:29.175939 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:29 crc kubenswrapper[4849]: I1209 12:07:29.208339 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v9rb2" podStartSLOduration=8.78334972 podStartE2EDuration="11.208319146s" podCreationTimestamp="2025-12-09 12:07:18 +0000 UTC" firstStartedPulling="2025-12-09 12:07:20.868074481 +0000 UTC m=+2423.407958797" lastFinishedPulling="2025-12-09 12:07:23.293043907 +0000 UTC m=+2425.832928223" observedRunningTime="2025-12-09 12:07:23.930977136 +0000 UTC m=+2426.470861472" watchObservedRunningTime="2025-12-09 12:07:29.208319146 +0000 UTC m=+2431.748203482" Dec 09 12:07:29 crc kubenswrapper[4849]: I1209 12:07:29.992080 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:30 crc kubenswrapper[4849]: I1209 12:07:30.044761 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9rb2"] Dec 09 12:07:31 crc kubenswrapper[4849]: I1209 12:07:31.986820 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v9rb2" podUID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerName="registry-server" containerID="cri-o://6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588" gracePeriod=2 Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.462889 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.604813 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt8mv\" (UniqueName: \"kubernetes.io/projected/896c6168-6db7-44c5-87ca-53b2914e7fa0-kube-api-access-bt8mv\") pod \"896c6168-6db7-44c5-87ca-53b2914e7fa0\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.604874 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-catalog-content\") pod \"896c6168-6db7-44c5-87ca-53b2914e7fa0\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.604956 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-utilities\") pod \"896c6168-6db7-44c5-87ca-53b2914e7fa0\" (UID: \"896c6168-6db7-44c5-87ca-53b2914e7fa0\") " Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.606255 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-utilities" (OuterVolumeSpecName: "utilities") pod "896c6168-6db7-44c5-87ca-53b2914e7fa0" (UID: "896c6168-6db7-44c5-87ca-53b2914e7fa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.627089 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896c6168-6db7-44c5-87ca-53b2914e7fa0-kube-api-access-bt8mv" (OuterVolumeSpecName: "kube-api-access-bt8mv") pod "896c6168-6db7-44c5-87ca-53b2914e7fa0" (UID: "896c6168-6db7-44c5-87ca-53b2914e7fa0"). InnerVolumeSpecName "kube-api-access-bt8mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.636185 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "896c6168-6db7-44c5-87ca-53b2914e7fa0" (UID: "896c6168-6db7-44c5-87ca-53b2914e7fa0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.707617 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt8mv\" (UniqueName: \"kubernetes.io/projected/896c6168-6db7-44c5-87ca-53b2914e7fa0-kube-api-access-bt8mv\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.707647 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:32 crc kubenswrapper[4849]: I1209 12:07:32.707670 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/896c6168-6db7-44c5-87ca-53b2914e7fa0-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.004313 4849 generic.go:334] "Generic (PLEG): container finished" podID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerID="6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588" exitCode=0 Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.004583 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9rb2" event={"ID":"896c6168-6db7-44c5-87ca-53b2914e7fa0","Type":"ContainerDied","Data":"6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588"} Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.004668 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9rb2" event={"ID":"896c6168-6db7-44c5-87ca-53b2914e7fa0","Type":"ContainerDied","Data":"7f7a61c69170a3b0895d85624ed7f0ad8fa633575a9a544b32b6502c3c497312"} Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.004693 4849 scope.go:117] "RemoveContainer" containerID="6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.004942 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9rb2" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.031681 4849 scope.go:117] "RemoveContainer" containerID="83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.055343 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9rb2"] Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.070777 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9rb2"] Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.081176 4849 scope.go:117] "RemoveContainer" containerID="fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.114940 4849 scope.go:117] "RemoveContainer" containerID="6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588" Dec 09 12:07:33 crc kubenswrapper[4849]: E1209 12:07:33.115403 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588\": container with ID starting with 6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588 not found: ID does not exist" containerID="6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.115446 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588"} err="failed to get container status \"6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588\": rpc error: code = NotFound desc = could not find container \"6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588\": container with ID starting with 6bc7d3311661dc3e7a0ba2e687f9470fbcad8134c1710442e4aa2ca2b5c1a588 not found: ID does not exist" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.115467 4849 scope.go:117] "RemoveContainer" containerID="83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b" Dec 09 12:07:33 crc kubenswrapper[4849]: E1209 12:07:33.115730 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b\": container with ID starting with 83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b not found: ID does not exist" containerID="83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.115754 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b"} err="failed to get container status \"83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b\": rpc error: code = NotFound desc = could not find container \"83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b\": container with ID starting with 83e4b6771be8852679e89e423b375729f0b0b0e8cdd659d01e4782a6e0d6d16b not found: ID does not exist" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.115768 4849 scope.go:117] "RemoveContainer" containerID="fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb" Dec 09 12:07:33 crc kubenswrapper[4849]: E1209 12:07:33.115999 4849 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb\": container with ID starting with fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb not found: ID does not exist" containerID="fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb" Dec 09 12:07:33 crc kubenswrapper[4849]: I1209 12:07:33.116025 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb"} err="failed to get container status \"fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb\": rpc error: code = NotFound desc = could not find container \"fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb\": container with ID starting with fae2c1d4a7f5f2c5c0b9f65bc03ea3317038dbe810070635eb0baf7462f69cfb not found: ID does not exist" Dec 09 12:07:34 crc kubenswrapper[4849]: I1209 12:07:34.548234 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="896c6168-6db7-44c5-87ca-53b2914e7fa0" path="/var/lib/kubelet/pods/896c6168-6db7-44c5-87ca-53b2914e7fa0/volumes" Dec 09 12:07:36 crc kubenswrapper[4849]: I1209 12:07:36.867817 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc_a596136d-71ff-41b2-afcc-5886048a6af9/util/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.114949 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc_a596136d-71ff-41b2-afcc-5886048a6af9/pull/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.134343 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc_a596136d-71ff-41b2-afcc-5886048a6af9/pull/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.154135 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc_a596136d-71ff-41b2-afcc-5886048a6af9/util/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.314125 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc_a596136d-71ff-41b2-afcc-5886048a6af9/util/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.394471 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc_a596136d-71ff-41b2-afcc-5886048a6af9/extract/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.411282 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a109140b4d6e50441b54c96aa41c588b80fdbe205abcd1763e186686fd42hbc_a596136d-71ff-41b2-afcc-5886048a6af9/pull/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.563727 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-nkjhr_f24fc0f5-c0b5-4155-874b-34f3cbb0ad25/kube-rbac-proxy/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.648929 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-nkjhr_f24fc0f5-c0b5-4155-874b-34f3cbb0ad25/manager/0.log" Dec 09 12:07:37 crc 
kubenswrapper[4849]: I1209 12:07:37.690710 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-ch4jh_93362b58-a33b-4683-ad57-6b72bb7d8655/kube-rbac-proxy/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.833437 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-ch4jh_93362b58-a33b-4683-ad57-6b72bb7d8655/manager/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.873571 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-hmntq_526627f5-817a-4f47-a28c-cc3597989b1d/kube-rbac-proxy/0.log" Dec 09 12:07:37 crc kubenswrapper[4849]: I1209 12:07:37.901709 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-hmntq_526627f5-817a-4f47-a28c-cc3597989b1d/manager/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.082990 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-xnt5q_577693e5-e4d7-4a4f-be14-41630da8744f/kube-rbac-proxy/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.156903 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-xnt5q_577693e5-e4d7-4a4f-be14-41630da8744f/manager/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.279311 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-s6jnd_9143dc55-4bce-4cfe-a704-73cf4e65c91f/manager/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.286330 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-s6jnd_9143dc55-4bce-4cfe-a704-73cf4e65c91f/kube-rbac-proxy/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.380862 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-hzr9p_f891f270-493d-463a-9514-127200c5c495/kube-rbac-proxy/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.481116 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-hzr9p_f891f270-493d-463a-9514-127200c5c495/manager/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.655489 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-88dmp_d2444ef1-caaa-4c1f-b3ac-a503b340bb87/kube-rbac-proxy/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.842356 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-88dmp_d2444ef1-caaa-4c1f-b3ac-a503b340bb87/manager/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.867907 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-z694m_16904597-72e8-41f0-8810-cd75ff6af881/kube-rbac-proxy/0.log" Dec 09 12:07:38 crc kubenswrapper[4849]: I1209 12:07:38.953720 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-z694m_16904597-72e8-41f0-8810-cd75ff6af881/manager/0.log" Dec 09 
12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.044817 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-ns9dz_f51d531d-7b17-44e5-907d-9272df92466f/kube-rbac-proxy/0.log" Dec 09 12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.157214 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-ns9dz_f51d531d-7b17-44e5-907d-9272df92466f/manager/0.log" Dec 09 12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.297791 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-hr8b8_f671b0c9-9b37-4150-a41c-7c95a969c149/kube-rbac-proxy/0.log" Dec 09 12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.349600 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-hr8b8_f671b0c9-9b37-4150-a41c-7c95a969c149/manager/0.log" Dec 09 12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.552297 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-8tvx7_473b8be0-bc7e-4c51-ab9a-73771a1664c2/kube-rbac-proxy/0.log" Dec 09 12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.590590 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-8tvx7_473b8be0-bc7e-4c51-ab9a-73771a1664c2/manager/0.log" Dec 09 12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.761673 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-5w7tw_40bac272-7e22-45e7-841c-7cdd4f87f1ad/kube-rbac-proxy/0.log" Dec 09 12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.763620 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-5w7tw_40bac272-7e22-45e7-841c-7cdd4f87f1ad/manager/0.log" Dec 09 12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.823495 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-5wpm6_53f856a1-0579-4b0a-8294-a2ffb94bf4e5/kube-rbac-proxy/0.log" Dec 09 12:07:39 crc kubenswrapper[4849]: I1209 12:07:39.991075 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-4hsgp_48eb886e-615e-419e-af3a-28e348e24a13/kube-rbac-proxy/0.log" Dec 09 12:07:40 crc kubenswrapper[4849]: I1209 12:07:40.081726 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-4hsgp_48eb886e-615e-419e-af3a-28e348e24a13/manager/0.log" Dec 09 12:07:40 crc kubenswrapper[4849]: I1209 12:07:40.098632 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-5wpm6_53f856a1-0579-4b0a-8294-a2ffb94bf4e5/manager/0.log" Dec 09 12:07:40 crc kubenswrapper[4849]: I1209 12:07:40.216235 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fm55d8_ed081101-9961-4cf9-9725-0ec764af322b/kube-rbac-proxy/0.log" Dec 09 12:07:40 crc kubenswrapper[4849]: I1209 12:07:40.323010 4849 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fm55d8_ed081101-9961-4cf9-9725-0ec764af322b/manager/0.log" Dec 09 12:07:40 crc kubenswrapper[4849]: I1209 12:07:40.762358 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-bc7998764-95772_6eb46d53-c911-45b6-b66d-982cb5e46f18/operator/0.log" Dec 09 12:07:40 crc kubenswrapper[4849]: I1209 12:07:40.791311 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-r4kw2_5f5f2ad5-e7ac-4940-8c46-bd32cb571127/registry-server/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.011610 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-26rfq_bc26bf04-a33a-4314-a0fa-216360ac6d3b/kube-rbac-proxy/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.274855 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-26rfq_bc26bf04-a33a-4314-a0fa-216360ac6d3b/manager/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.339647 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-6xz62_232105fe-9c4a-438e-bac7-0f13e78fb972/kube-rbac-proxy/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.482078 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-6xz62_232105fe-9c4a-438e-bac7-0f13e78fb972/manager/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.537447 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cfb8477d8-j2tf9_6b911c78-1753-46a4-a042-c1395c2a73a9/manager/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.576317 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-rx9bn_3e922935-a9e7-49ab-bd10-f575e0ab0445/operator/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.716075 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-nn66x_42cdfefe-0e9c-4ff8-8447-5153ac692a2d/kube-rbac-proxy/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.769546 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-nn66x_42cdfefe-0e9c-4ff8-8447-5153ac692a2d/manager/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.803785 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-gnw95_9b389d0f-7f09-4744-b582-cf09ffe3c937/kube-rbac-proxy/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.968435 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-xrq2w_ab547409-b5b9-41ba-897d-01bd4d233906/kube-rbac-proxy/0.log" Dec 09 12:07:41 crc kubenswrapper[4849]: I1209 12:07:41.970578 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-gnw95_9b389d0f-7f09-4744-b582-cf09ffe3c937/manager/0.log" Dec 09 12:07:42 crc kubenswrapper[4849]: I1209 12:07:42.027632 4849 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-xrq2w_ab547409-b5b9-41ba-897d-01bd4d233906/manager/0.log" Dec 09 12:07:42 crc kubenswrapper[4849]: I1209 12:07:42.471689 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-k9vnf_3cd2993d-bfa4-4aae-b11c-cdc46b9671da/kube-rbac-proxy/0.log" Dec 09 12:07:42 crc kubenswrapper[4849]: I1209 12:07:42.481281 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-k9vnf_3cd2993d-bfa4-4aae-b11c-cdc46b9671da/manager/0.log" Dec 09 12:07:42 crc kubenswrapper[4849]: I1209 12:07:42.536721 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:07:42 crc kubenswrapper[4849]: E1209 12:07:42.536964 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:07:53 crc kubenswrapper[4849]: I1209 12:07:53.537199 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:07:53 crc kubenswrapper[4849]: E1209 12:07:53.538113 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:08:02 crc kubenswrapper[4849]: I1209 12:08:02.959605 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dvfn7_4a0fccbe-ade0-4666-8758-d67f3c74e8e7/control-plane-machine-set-operator/0.log" Dec 09 12:08:03 crc kubenswrapper[4849]: I1209 12:08:03.131804 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zqkl8_d41acaad-c321-4016-8330-f6de9b6e9326/kube-rbac-proxy/0.log" Dec 09 12:08:03 crc kubenswrapper[4849]: I1209 12:08:03.144584 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zqkl8_d41acaad-c321-4016-8330-f6de9b6e9326/machine-api-operator/0.log" Dec 09 12:08:06 crc kubenswrapper[4849]: I1209 12:08:06.539537 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:08:06 crc kubenswrapper[4849]: E1209 12:08:06.540278 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:08:15 crc kubenswrapper[4849]: I1209 12:08:15.645102 4849 log.go:25] "Finished parsing 
log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-zrdxp_ef0763a6-1232-4f65-a803-50ed551a126a/cert-manager-controller/0.log" Dec 09 12:08:15 crc kubenswrapper[4849]: I1209 12:08:15.922713 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-n6p9q_9c0dd8aa-7e1e-4af8-aa67-b371f6215b98/cert-manager-webhook/0.log" Dec 09 12:08:15 crc kubenswrapper[4849]: I1209 12:08:15.927367 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-cm7gh_6ace19a4-a4eb-40fa-af3a-b08a2590c64f/cert-manager-cainjector/0.log" Dec 09 12:08:20 crc kubenswrapper[4849]: I1209 12:08:20.536242 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:08:20 crc kubenswrapper[4849]: E1209 12:08:20.537101 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:08:27 crc kubenswrapper[4849]: I1209 12:08:27.844289 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-fsp9r_3c6cd138-dbe0-4baf-a149-341d01905fc8/nmstate-console-plugin/0.log" Dec 09 12:08:27 crc kubenswrapper[4849]: I1209 12:08:27.986718 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mlpgl_6f62a435-b00e-4eba-a243-91c18c9639e4/nmstate-handler/0.log" Dec 09 12:08:28 crc kubenswrapper[4849]: I1209 12:08:28.072829 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-4d6rl_4412c89c-f551-4683-8682-8fc188bf086d/nmstate-metrics/0.log" Dec 09 12:08:28 crc kubenswrapper[4849]: I1209 12:08:28.118867 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-4d6rl_4412c89c-f551-4683-8682-8fc188bf086d/kube-rbac-proxy/0.log" Dec 09 12:08:28 crc kubenswrapper[4849]: I1209 12:08:28.272249 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-c7nwd_6a094363-5e56-4743-99fb-4fc11e2d74cd/nmstate-operator/0.log" Dec 09 12:08:28 crc kubenswrapper[4849]: I1209 12:08:28.344077 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-m2zmv_e6fe8bd2-eeed-4a5f-b2a8-eec7fd6b9518/nmstate-webhook/0.log" Dec 09 12:08:31 crc kubenswrapper[4849]: I1209 12:08:31.536843 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:08:31 crc kubenswrapper[4849]: E1209 12:08:31.537430 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.004523 4849 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-f8648f98b-mdbqt_0470a171-1894-4d83-b3d3-aae6580ef2e1/kube-rbac-proxy/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.077625 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-mdbqt_0470a171-1894-4d83-b3d3-aae6580ef2e1/controller/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.218773 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-frr-files/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.379393 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-frr-files/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.426033 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-reloader/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.455425 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-reloader/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.536968 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-metrics/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.835170 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-reloader/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.838309 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-frr-files/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.849907 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-metrics/0.log" Dec 09 12:08:44 crc kubenswrapper[4849]: I1209 12:08:44.904923 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-metrics/0.log" Dec 09 12:08:45 crc kubenswrapper[4849]: I1209 12:08:45.380766 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-frr-files/0.log" Dec 09 12:08:45 crc kubenswrapper[4849]: I1209 12:08:45.387662 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-metrics/0.log" Dec 09 12:08:45 crc kubenswrapper[4849]: I1209 12:08:45.448023 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/cp-reloader/0.log" Dec 09 12:08:45 crc kubenswrapper[4849]: I1209 12:08:45.505612 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/controller/0.log" Dec 09 12:08:45 crc kubenswrapper[4849]: I1209 12:08:45.536737 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:08:45 crc kubenswrapper[4849]: E1209 12:08:45.536958 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:08:45 crc kubenswrapper[4849]: I1209 12:08:45.596953 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/frr-metrics/0.log" Dec 09 12:08:45 crc kubenswrapper[4849]: I1209 12:08:45.777530 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/kube-rbac-proxy/0.log" Dec 09 12:08:46 crc kubenswrapper[4849]: I1209 12:08:46.037876 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/kube-rbac-proxy-frr/0.log" Dec 09 12:08:46 crc kubenswrapper[4849]: I1209 12:08:46.088892 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/reloader/0.log" Dec 09 12:08:46 crc kubenswrapper[4849]: I1209 12:08:46.302810 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-bsgvl_a39fe675-ad51-4758-a2f3-b911b8a9f5fd/frr-k8s-webhook-server/0.log" Dec 09 12:08:46 crc kubenswrapper[4849]: I1209 12:08:46.579888 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k6bpg_7f4f8e75-d158-487b-872b-4cfa2cb0b98b/frr/0.log" Dec 09 12:08:46 crc kubenswrapper[4849]: I1209 12:08:46.632787 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-9db7cfdf8-7vdzt_19796ce6-f4e9-451a-ba5a-85624de86e77/manager/0.log" Dec 09 12:08:46 crc kubenswrapper[4849]: I1209 12:08:46.736817 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-655d65f479-n7rjg_1c26adb0-81b9-4722-b799-4cc66c301025/webhook-server/0.log" Dec 09 12:08:46 crc kubenswrapper[4849]: I1209 12:08:46.994612 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lxwrr_c3a2373d-8193-43ee-b1de-003115ad48f6/kube-rbac-proxy/0.log" Dec 09 12:08:47 crc kubenswrapper[4849]: I1209 12:08:47.228034 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lxwrr_c3a2373d-8193-43ee-b1de-003115ad48f6/speaker/0.log" Dec 09 12:08:58 crc kubenswrapper[4849]: I1209 12:08:58.546346 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:08:59 crc kubenswrapper[4849]: I1209 12:08:59.751077 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"66280f7ade75804b6c0096c7c66616b5d7da643ae6d9d19df6728655528ef876"} Dec 09 12:09:02 crc kubenswrapper[4849]: I1209 12:09:02.112520 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd_102d8ac7-6bbd-4f2b-874d-345a57d9986f/util/0.log" Dec 09 12:09:02 crc kubenswrapper[4849]: I1209 12:09:02.362729 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd_102d8ac7-6bbd-4f2b-874d-345a57d9986f/pull/0.log" Dec 09 12:09:02 crc kubenswrapper[4849]: I1209 12:09:02.444451 
4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd_102d8ac7-6bbd-4f2b-874d-345a57d9986f/pull/0.log" Dec 09 12:09:02 crc kubenswrapper[4849]: I1209 12:09:02.461221 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd_102d8ac7-6bbd-4f2b-874d-345a57d9986f/util/0.log" Dec 09 12:09:02 crc kubenswrapper[4849]: I1209 12:09:02.628495 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd_102d8ac7-6bbd-4f2b-874d-345a57d9986f/util/0.log" Dec 09 12:09:02 crc kubenswrapper[4849]: I1209 12:09:02.716748 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd_102d8ac7-6bbd-4f2b-874d-345a57d9986f/pull/0.log" Dec 09 12:09:02 crc kubenswrapper[4849]: I1209 12:09:02.793792 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvzrdd_102d8ac7-6bbd-4f2b-874d-345a57d9986f/extract/0.log" Dec 09 12:09:02 crc kubenswrapper[4849]: I1209 12:09:02.878048 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw_3eb93973-472b-4a08-ad39-4638fdbdf108/util/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.108826 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw_3eb93973-472b-4a08-ad39-4638fdbdf108/pull/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.134458 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw_3eb93973-472b-4a08-ad39-4638fdbdf108/util/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.146588 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw_3eb93973-472b-4a08-ad39-4638fdbdf108/pull/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.391318 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw_3eb93973-472b-4a08-ad39-4638fdbdf108/pull/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.405952 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw_3eb93973-472b-4a08-ad39-4638fdbdf108/util/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.475182 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f838h4jw_3eb93973-472b-4a08-ad39-4638fdbdf108/extract/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.590945 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dhpb4_e6fc1b93-1648-4dea-b4ed-8eb4e307011a/extract-utilities/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.849827 4849 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-dhpb4_e6fc1b93-1648-4dea-b4ed-8eb4e307011a/extract-utilities/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.883061 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dhpb4_e6fc1b93-1648-4dea-b4ed-8eb4e307011a/extract-content/0.log" Dec 09 12:09:03 crc kubenswrapper[4849]: I1209 12:09:03.894506 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dhpb4_e6fc1b93-1648-4dea-b4ed-8eb4e307011a/extract-content/0.log" Dec 09 12:09:04 crc kubenswrapper[4849]: I1209 12:09:04.077935 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dhpb4_e6fc1b93-1648-4dea-b4ed-8eb4e307011a/extract-content/0.log" Dec 09 12:09:04 crc kubenswrapper[4849]: I1209 12:09:04.117775 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dhpb4_e6fc1b93-1648-4dea-b4ed-8eb4e307011a/extract-utilities/0.log" Dec 09 12:09:04 crc kubenswrapper[4849]: I1209 12:09:04.445000 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cq7jk_22b13fa0-7feb-45d4-8415-1834db2f96c5/extract-utilities/0.log" Dec 09 12:09:04 crc kubenswrapper[4849]: I1209 12:09:04.621700 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dhpb4_e6fc1b93-1648-4dea-b4ed-8eb4e307011a/registry-server/0.log" Dec 09 12:09:04 crc kubenswrapper[4849]: I1209 12:09:04.741887 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cq7jk_22b13fa0-7feb-45d4-8415-1834db2f96c5/extract-content/0.log" Dec 09 12:09:04 crc kubenswrapper[4849]: I1209 12:09:04.752973 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cq7jk_22b13fa0-7feb-45d4-8415-1834db2f96c5/extract-utilities/0.log" Dec 09 12:09:04 crc kubenswrapper[4849]: I1209 12:09:04.821736 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cq7jk_22b13fa0-7feb-45d4-8415-1834db2f96c5/extract-content/0.log" Dec 09 12:09:04 crc kubenswrapper[4849]: I1209 12:09:04.973363 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cq7jk_22b13fa0-7feb-45d4-8415-1834db2f96c5/extract-content/0.log" Dec 09 12:09:05 crc kubenswrapper[4849]: I1209 12:09:05.001235 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cq7jk_22b13fa0-7feb-45d4-8415-1834db2f96c5/extract-utilities/0.log" Dec 09 12:09:05 crc kubenswrapper[4849]: I1209 12:09:05.310035 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fpwrl_6eb652dc-111b-4544-a20e-0c98d451825d/marketplace-operator/0.log" Dec 09 12:09:05 crc kubenswrapper[4849]: I1209 12:09:05.373569 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cq7jk_22b13fa0-7feb-45d4-8415-1834db2f96c5/registry-server/0.log" Dec 09 12:09:05 crc kubenswrapper[4849]: I1209 12:09:05.425062 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fw7ws_1d0053b5-2860-49fc-98d9-a9d08c9d6b19/extract-utilities/0.log" Dec 09 12:09:05 crc kubenswrapper[4849]: I1209 12:09:05.619088 4849 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-fw7ws_1d0053b5-2860-49fc-98d9-a9d08c9d6b19/extract-content/0.log" Dec 09 12:09:05 crc kubenswrapper[4849]: I1209 12:09:05.637547 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fw7ws_1d0053b5-2860-49fc-98d9-a9d08c9d6b19/extract-utilities/0.log" Dec 09 12:09:05 crc kubenswrapper[4849]: I1209 12:09:05.703548 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fw7ws_1d0053b5-2860-49fc-98d9-a9d08c9d6b19/extract-content/0.log" Dec 09 12:09:05 crc kubenswrapper[4849]: I1209 12:09:05.955257 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fw7ws_1d0053b5-2860-49fc-98d9-a9d08c9d6b19/extract-content/0.log" Dec 09 12:09:06 crc kubenswrapper[4849]: I1209 12:09:06.052970 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fw7ws_1d0053b5-2860-49fc-98d9-a9d08c9d6b19/extract-utilities/0.log" Dec 09 12:09:06 crc kubenswrapper[4849]: I1209 12:09:06.095758 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fw7ws_1d0053b5-2860-49fc-98d9-a9d08c9d6b19/registry-server/0.log" Dec 09 12:09:06 crc kubenswrapper[4849]: I1209 12:09:06.271815 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9rq2m_591a8321-876b-43fc-a46e-9e632c31e6ad/extract-utilities/0.log" Dec 09 12:09:06 crc kubenswrapper[4849]: I1209 12:09:06.499298 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9rq2m_591a8321-876b-43fc-a46e-9e632c31e6ad/extract-content/0.log" Dec 09 12:09:06 crc kubenswrapper[4849]: I1209 12:09:06.511799 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9rq2m_591a8321-876b-43fc-a46e-9e632c31e6ad/extract-utilities/0.log" Dec 09 12:09:06 crc kubenswrapper[4849]: I1209 12:09:06.571566 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9rq2m_591a8321-876b-43fc-a46e-9e632c31e6ad/extract-content/0.log" Dec 09 12:09:06 crc kubenswrapper[4849]: I1209 12:09:06.756503 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9rq2m_591a8321-876b-43fc-a46e-9e632c31e6ad/extract-utilities/0.log" Dec 09 12:09:06 crc kubenswrapper[4849]: I1209 12:09:06.774780 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9rq2m_591a8321-876b-43fc-a46e-9e632c31e6ad/extract-content/0.log" Dec 09 12:09:07 crc kubenswrapper[4849]: I1209 12:09:07.073830 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9rq2m_591a8321-876b-43fc-a46e-9e632c31e6ad/registry-server/0.log" Dec 09 12:11:06 crc kubenswrapper[4849]: I1209 12:11:06.902936 4849 generic.go:334] "Generic (PLEG): container finished" podID="8988f826-1349-4c00-9ac1-9540bb868c89" containerID="9d296a6fb04fa63ff9a3750ab79e50e0053c2605505b1ab3be868a7ea4dcb87d" exitCode=0 Dec 09 12:11:06 crc kubenswrapper[4849]: I1209 12:11:06.903003 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-znxg4/must-gather-mbhmx" event={"ID":"8988f826-1349-4c00-9ac1-9540bb868c89","Type":"ContainerDied","Data":"9d296a6fb04fa63ff9a3750ab79e50e0053c2605505b1ab3be868a7ea4dcb87d"} Dec 09 12:11:06 crc kubenswrapper[4849]: I1209 
12:11:06.904040 4849 scope.go:117] "RemoveContainer" containerID="9d296a6fb04fa63ff9a3750ab79e50e0053c2605505b1ab3be868a7ea4dcb87d" Dec 09 12:11:07 crc kubenswrapper[4849]: I1209 12:11:07.195508 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-znxg4_must-gather-mbhmx_8988f826-1349-4c00-9ac1-9540bb868c89/gather/0.log" Dec 09 12:11:15 crc kubenswrapper[4849]: I1209 12:11:15.675708 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-znxg4/must-gather-mbhmx"] Dec 09 12:11:15 crc kubenswrapper[4849]: I1209 12:11:15.676648 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-znxg4/must-gather-mbhmx" podUID="8988f826-1349-4c00-9ac1-9540bb868c89" containerName="copy" containerID="cri-o://cb6bfa29a0bc65fc7785a86886c996a89e2a92e20a3b9db2c2a57ddf12942b90" gracePeriod=2 Dec 09 12:11:15 crc kubenswrapper[4849]: I1209 12:11:15.688869 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-znxg4/must-gather-mbhmx"] Dec 09 12:11:15 crc kubenswrapper[4849]: I1209 12:11:15.983387 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-znxg4_must-gather-mbhmx_8988f826-1349-4c00-9ac1-9540bb868c89/copy/0.log" Dec 09 12:11:15 crc kubenswrapper[4849]: I1209 12:11:15.984544 4849 generic.go:334] "Generic (PLEG): container finished" podID="8988f826-1349-4c00-9ac1-9540bb868c89" containerID="cb6bfa29a0bc65fc7785a86886c996a89e2a92e20a3b9db2c2a57ddf12942b90" exitCode=143 Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.114947 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-znxg4_must-gather-mbhmx_8988f826-1349-4c00-9ac1-9540bb868c89/copy/0.log" Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.115552 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.236897 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8988f826-1349-4c00-9ac1-9540bb868c89-must-gather-output\") pod \"8988f826-1349-4c00-9ac1-9540bb868c89\" (UID: \"8988f826-1349-4c00-9ac1-9540bb868c89\") " Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.237636 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbpp8\" (UniqueName: \"kubernetes.io/projected/8988f826-1349-4c00-9ac1-9540bb868c89-kube-api-access-kbpp8\") pod \"8988f826-1349-4c00-9ac1-9540bb868c89\" (UID: \"8988f826-1349-4c00-9ac1-9540bb868c89\") " Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.256685 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8988f826-1349-4c00-9ac1-9540bb868c89-kube-api-access-kbpp8" (OuterVolumeSpecName: "kube-api-access-kbpp8") pod "8988f826-1349-4c00-9ac1-9540bb868c89" (UID: "8988f826-1349-4c00-9ac1-9540bb868c89"). InnerVolumeSpecName "kube-api-access-kbpp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.340367 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbpp8\" (UniqueName: \"kubernetes.io/projected/8988f826-1349-4c00-9ac1-9540bb868c89-kube-api-access-kbpp8\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.416722 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8988f826-1349-4c00-9ac1-9540bb868c89-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8988f826-1349-4c00-9ac1-9540bb868c89" (UID: "8988f826-1349-4c00-9ac1-9540bb868c89"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.442648 4849 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8988f826-1349-4c00-9ac1-9540bb868c89-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.547182 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8988f826-1349-4c00-9ac1-9540bb868c89" path="/var/lib/kubelet/pods/8988f826-1349-4c00-9ac1-9540bb868c89/volumes" Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.996087 4849 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-znxg4_must-gather-mbhmx_8988f826-1349-4c00-9ac1-9540bb868c89/copy/0.log" Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.996659 4849 scope.go:117] "RemoveContainer" containerID="cb6bfa29a0bc65fc7785a86886c996a89e2a92e20a3b9db2c2a57ddf12942b90" Dec 09 12:11:16 crc kubenswrapper[4849]: I1209 12:11:16.996749 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-znxg4/must-gather-mbhmx" Dec 09 12:11:17 crc kubenswrapper[4849]: I1209 12:11:17.019829 4849 scope.go:117] "RemoveContainer" containerID="9d296a6fb04fa63ff9a3750ab79e50e0053c2605505b1ab3be868a7ea4dcb87d" Dec 09 12:11:21 crc kubenswrapper[4849]: I1209 12:11:21.132330 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:11:21 crc kubenswrapper[4849]: I1209 12:11:21.132924 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:11:51 crc kubenswrapper[4849]: I1209 12:11:51.132835 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:11:51 crc kubenswrapper[4849]: I1209 12:11:51.133481 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:12:21 crc kubenswrapper[4849]: I1209 12:12:21.132931 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:12:21 crc kubenswrapper[4849]: I1209 12:12:21.133488 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:12:21 crc kubenswrapper[4849]: I1209 12:12:21.133550 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" Dec 09 12:12:21 crc kubenswrapper[4849]: I1209 12:12:21.134363 4849 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66280f7ade75804b6c0096c7c66616b5d7da643ae6d9d19df6728655528ef876"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:12:21 crc kubenswrapper[4849]: I1209 12:12:21.134484 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://66280f7ade75804b6c0096c7c66616b5d7da643ae6d9d19df6728655528ef876" gracePeriod=600 Dec 09 12:12:21 crc 
kubenswrapper[4849]: I1209 12:12:21.604662 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="66280f7ade75804b6c0096c7c66616b5d7da643ae6d9d19df6728655528ef876" exitCode=0 Dec 09 12:12:21 crc kubenswrapper[4849]: I1209 12:12:21.605123 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"66280f7ade75804b6c0096c7c66616b5d7da643ae6d9d19df6728655528ef876"} Dec 09 12:12:21 crc kubenswrapper[4849]: I1209 12:12:21.605209 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerStarted","Data":"2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56"} Dec 09 12:12:21 crc kubenswrapper[4849]: I1209 12:12:21.605296 4849 scope.go:117] "RemoveContainer" containerID="264aaa891b4cb803ce16164250f1309b6d2f3032ea83792d9edc13f16f24b209" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.055020 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6ghtw"] Dec 09 12:13:28 crc kubenswrapper[4849]: E1209 12:13:28.056665 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerName="extract-utilities" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.056697 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerName="extract-utilities" Dec 09 12:13:28 crc kubenswrapper[4849]: E1209 12:13:28.056727 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8988f826-1349-4c00-9ac1-9540bb868c89" containerName="copy" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.056739 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="8988f826-1349-4c00-9ac1-9540bb868c89" containerName="copy" Dec 09 12:13:28 crc kubenswrapper[4849]: E1209 12:13:28.056781 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8988f826-1349-4c00-9ac1-9540bb868c89" containerName="gather" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.056792 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="8988f826-1349-4c00-9ac1-9540bb868c89" containerName="gather" Dec 09 12:13:28 crc kubenswrapper[4849]: E1209 12:13:28.056836 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerName="extract-content" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.056848 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerName="extract-content" Dec 09 12:13:28 crc kubenswrapper[4849]: E1209 12:13:28.056889 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerName="registry-server" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.056901 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerName="registry-server" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.057615 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="8988f826-1349-4c00-9ac1-9540bb868c89" containerName="gather" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.057660 4849 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8988f826-1349-4c00-9ac1-9540bb868c89" containerName="copy" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.057722 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="896c6168-6db7-44c5-87ca-53b2914e7fa0" containerName="registry-server" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.108762 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.111667 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6ghtw"] Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.260403 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-utilities\") pod \"certified-operators-6ghtw\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.260824 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-catalog-content\") pod \"certified-operators-6ghtw\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.260935 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx5pk\" (UniqueName: \"kubernetes.io/projected/67b09794-8bd4-4179-9fb8-31c295e10cc8-kube-api-access-mx5pk\") pod \"certified-operators-6ghtw\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.362572 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx5pk\" (UniqueName: \"kubernetes.io/projected/67b09794-8bd4-4179-9fb8-31c295e10cc8-kube-api-access-mx5pk\") pod \"certified-operators-6ghtw\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.363008 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-utilities\") pod \"certified-operators-6ghtw\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.363129 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-catalog-content\") pod \"certified-operators-6ghtw\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.363595 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-utilities\") pod \"certified-operators-6ghtw\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.363622 4849 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-catalog-content\") pod \"certified-operators-6ghtw\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.386876 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx5pk\" (UniqueName: \"kubernetes.io/projected/67b09794-8bd4-4179-9fb8-31c295e10cc8-kube-api-access-mx5pk\") pod \"certified-operators-6ghtw\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.438053 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:28 crc kubenswrapper[4849]: I1209 12:13:28.984609 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6ghtw"] Dec 09 12:13:29 crc kubenswrapper[4849]: I1209 12:13:29.326856 4849 generic.go:334] "Generic (PLEG): container finished" podID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerID="48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729" exitCode=0 Dec 09 12:13:29 crc kubenswrapper[4849]: I1209 12:13:29.326960 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ghtw" event={"ID":"67b09794-8bd4-4179-9fb8-31c295e10cc8","Type":"ContainerDied","Data":"48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729"} Dec 09 12:13:29 crc kubenswrapper[4849]: I1209 12:13:29.327101 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ghtw" event={"ID":"67b09794-8bd4-4179-9fb8-31c295e10cc8","Type":"ContainerStarted","Data":"ef169d9427d2ec36787a01b1e253e35d2c0fc40d430b0cb8ca1a713c742c2755"} Dec 09 12:13:29 crc kubenswrapper[4849]: I1209 12:13:29.328695 4849 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.017700 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4qsgp"] Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.020003 4849 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.046712 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qsgp"] Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.217211 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-utilities\") pod \"redhat-operators-4qsgp\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.217336 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdh2l\" (UniqueName: \"kubernetes.io/projected/42655010-277e-4f81-826a-a2ea23d46571-kube-api-access-fdh2l\") pod \"redhat-operators-4qsgp\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.217386 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-catalog-content\") pod \"redhat-operators-4qsgp\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.319520 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdh2l\" (UniqueName: \"kubernetes.io/projected/42655010-277e-4f81-826a-a2ea23d46571-kube-api-access-fdh2l\") pod \"redhat-operators-4qsgp\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.319614 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-catalog-content\") pod \"redhat-operators-4qsgp\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.319714 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-utilities\") pod \"redhat-operators-4qsgp\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.320169 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-catalog-content\") pod \"redhat-operators-4qsgp\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.320246 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-utilities\") pod \"redhat-operators-4qsgp\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.349542 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fdh2l\" (UniqueName: \"kubernetes.io/projected/42655010-277e-4f81-826a-a2ea23d46571-kube-api-access-fdh2l\") pod \"redhat-operators-4qsgp\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.351296 4849 generic.go:334] "Generic (PLEG): container finished" podID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerID="cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b" exitCode=0 Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.351669 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ghtw" event={"ID":"67b09794-8bd4-4179-9fb8-31c295e10cc8","Type":"ContainerDied","Data":"cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b"} Dec 09 12:13:31 crc kubenswrapper[4849]: I1209 12:13:31.641841 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:32 crc kubenswrapper[4849]: I1209 12:13:32.115687 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qsgp"] Dec 09 12:13:32 crc kubenswrapper[4849]: I1209 12:13:32.363766 4849 generic.go:334] "Generic (PLEG): container finished" podID="42655010-277e-4f81-826a-a2ea23d46571" containerID="ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b" exitCode=0 Dec 09 12:13:32 crc kubenswrapper[4849]: I1209 12:13:32.364268 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qsgp" event={"ID":"42655010-277e-4f81-826a-a2ea23d46571","Type":"ContainerDied","Data":"ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b"} Dec 09 12:13:32 crc kubenswrapper[4849]: I1209 12:13:32.364402 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qsgp" event={"ID":"42655010-277e-4f81-826a-a2ea23d46571","Type":"ContainerStarted","Data":"5ea2fe074de581ca00d7fd6a100224b6c3f1f4d8c8f1441c7d7b0dfedecdbe1b"} Dec 09 12:13:32 crc kubenswrapper[4849]: I1209 12:13:32.373283 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ghtw" event={"ID":"67b09794-8bd4-4179-9fb8-31c295e10cc8","Type":"ContainerStarted","Data":"daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074"} Dec 09 12:13:32 crc kubenswrapper[4849]: I1209 12:13:32.420967 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6ghtw" podStartSLOduration=1.767211565 podStartE2EDuration="4.420947462s" podCreationTimestamp="2025-12-09 12:13:28 +0000 UTC" firstStartedPulling="2025-12-09 12:13:29.328357378 +0000 UTC m=+2791.868241684" lastFinishedPulling="2025-12-09 12:13:31.982093265 +0000 UTC m=+2794.521977581" observedRunningTime="2025-12-09 12:13:32.408543855 +0000 UTC m=+2794.948428171" watchObservedRunningTime="2025-12-09 12:13:32.420947462 +0000 UTC m=+2794.960831778" Dec 09 12:13:33 crc kubenswrapper[4849]: I1209 12:13:33.384926 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qsgp" event={"ID":"42655010-277e-4f81-826a-a2ea23d46571","Type":"ContainerStarted","Data":"3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8"} Dec 09 12:13:36 crc kubenswrapper[4849]: I1209 12:13:36.410202 4849 generic.go:334] "Generic (PLEG): container finished" podID="42655010-277e-4f81-826a-a2ea23d46571" 
containerID="3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8" exitCode=0 Dec 09 12:13:36 crc kubenswrapper[4849]: I1209 12:13:36.410724 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qsgp" event={"ID":"42655010-277e-4f81-826a-a2ea23d46571","Type":"ContainerDied","Data":"3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8"} Dec 09 12:13:38 crc kubenswrapper[4849]: I1209 12:13:38.427882 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qsgp" event={"ID":"42655010-277e-4f81-826a-a2ea23d46571","Type":"ContainerStarted","Data":"a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5"} Dec 09 12:13:38 crc kubenswrapper[4849]: I1209 12:13:38.439118 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:38 crc kubenswrapper[4849]: I1209 12:13:38.439183 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:38 crc kubenswrapper[4849]: I1209 12:13:38.458055 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4qsgp" podStartSLOduration=3.451065137 podStartE2EDuration="8.458035081s" podCreationTimestamp="2025-12-09 12:13:30 +0000 UTC" firstStartedPulling="2025-12-09 12:13:32.366296169 +0000 UTC m=+2794.906180485" lastFinishedPulling="2025-12-09 12:13:37.373266113 +0000 UTC m=+2799.913150429" observedRunningTime="2025-12-09 12:13:38.44547232 +0000 UTC m=+2800.985356656" watchObservedRunningTime="2025-12-09 12:13:38.458035081 +0000 UTC m=+2800.997919387" Dec 09 12:13:38 crc kubenswrapper[4849]: I1209 12:13:38.499879 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:39 crc kubenswrapper[4849]: I1209 12:13:39.492886 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:40 crc kubenswrapper[4849]: I1209 12:13:40.596862 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6ghtw"] Dec 09 12:13:41 crc kubenswrapper[4849]: I1209 12:13:41.450925 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6ghtw" podUID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerName="registry-server" containerID="cri-o://daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074" gracePeriod=2 Dec 09 12:13:41 crc kubenswrapper[4849]: I1209 12:13:41.642368 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:41 crc kubenswrapper[4849]: I1209 12:13:41.642770 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:42 crc kubenswrapper[4849]: I1209 12:13:42.690122 4849 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4qsgp" podUID="42655010-277e-4f81-826a-a2ea23d46571" containerName="registry-server" probeResult="failure" output=< Dec 09 12:13:42 crc kubenswrapper[4849]: timeout: failed to connect service ":50051" within 1s Dec 09 12:13:42 crc kubenswrapper[4849]: > Dec 09 12:13:42 crc kubenswrapper[4849]: I1209 12:13:42.986061 4849 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.158479 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-utilities\") pod \"67b09794-8bd4-4179-9fb8-31c295e10cc8\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.158596 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-catalog-content\") pod \"67b09794-8bd4-4179-9fb8-31c295e10cc8\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.158726 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx5pk\" (UniqueName: \"kubernetes.io/projected/67b09794-8bd4-4179-9fb8-31c295e10cc8-kube-api-access-mx5pk\") pod \"67b09794-8bd4-4179-9fb8-31c295e10cc8\" (UID: \"67b09794-8bd4-4179-9fb8-31c295e10cc8\") " Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.159248 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-utilities" (OuterVolumeSpecName: "utilities") pod "67b09794-8bd4-4179-9fb8-31c295e10cc8" (UID: "67b09794-8bd4-4179-9fb8-31c295e10cc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.173037 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b09794-8bd4-4179-9fb8-31c295e10cc8-kube-api-access-mx5pk" (OuterVolumeSpecName: "kube-api-access-mx5pk") pod "67b09794-8bd4-4179-9fb8-31c295e10cc8" (UID: "67b09794-8bd4-4179-9fb8-31c295e10cc8"). InnerVolumeSpecName "kube-api-access-mx5pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.195086 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67b09794-8bd4-4179-9fb8-31c295e10cc8" (UID: "67b09794-8bd4-4179-9fb8-31c295e10cc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.261472 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.261523 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b09794-8bd4-4179-9fb8-31c295e10cc8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.261535 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx5pk\" (UniqueName: \"kubernetes.io/projected/67b09794-8bd4-4179-9fb8-31c295e10cc8-kube-api-access-mx5pk\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.473818 4849 generic.go:334] "Generic (PLEG): container finished" podID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerID="daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074" exitCode=0 Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.473948 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6ghtw" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.473953 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ghtw" event={"ID":"67b09794-8bd4-4179-9fb8-31c295e10cc8","Type":"ContainerDied","Data":"daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074"} Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.474661 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ghtw" event={"ID":"67b09794-8bd4-4179-9fb8-31c295e10cc8","Type":"ContainerDied","Data":"ef169d9427d2ec36787a01b1e253e35d2c0fc40d430b0cb8ca1a713c742c2755"} Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.474681 4849 scope.go:117] "RemoveContainer" containerID="daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.512029 4849 scope.go:117] "RemoveContainer" containerID="cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.529078 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6ghtw"] Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.539759 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6ghtw"] Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.900964 4849 scope.go:117] "RemoveContainer" containerID="48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.934913 4849 scope.go:117] "RemoveContainer" containerID="daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074" Dec 09 12:13:43 crc kubenswrapper[4849]: E1209 12:13:43.935347 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074\": container with ID starting with daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074 not found: ID does not exist" containerID="daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.935376 
4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074"} err="failed to get container status \"daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074\": rpc error: code = NotFound desc = could not find container \"daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074\": container with ID starting with daca9739c6a7fe8cc4d637e4ea15b0a93f434009ffa66b783db3059cda3c4074 not found: ID does not exist" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.935398 4849 scope.go:117] "RemoveContainer" containerID="cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b" Dec 09 12:13:43 crc kubenswrapper[4849]: E1209 12:13:43.935632 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b\": container with ID starting with cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b not found: ID does not exist" containerID="cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.935656 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b"} err="failed to get container status \"cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b\": rpc error: code = NotFound desc = could not find container \"cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b\": container with ID starting with cd04f0bbdc00aa804eae8e6a1fd5a25365978034edbb17315cead65401b87f0b not found: ID does not exist" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.935675 4849 scope.go:117] "RemoveContainer" containerID="48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729" Dec 09 12:13:43 crc kubenswrapper[4849]: E1209 12:13:43.936286 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729\": container with ID starting with 48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729 not found: ID does not exist" containerID="48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729" Dec 09 12:13:43 crc kubenswrapper[4849]: I1209 12:13:43.936508 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729"} err="failed to get container status \"48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729\": rpc error: code = NotFound desc = could not find container \"48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729\": container with ID starting with 48bbfd057ea38bc6e3f5d06b7b7e0fd4c4ff6881422233ab518d466a8050a729 not found: ID does not exist" Dec 09 12:13:44 crc kubenswrapper[4849]: I1209 12:13:44.549284 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b09794-8bd4-4179-9fb8-31c295e10cc8" path="/var/lib/kubelet/pods/67b09794-8bd4-4179-9fb8-31c295e10cc8/volumes" Dec 09 12:13:51 crc kubenswrapper[4849]: I1209 12:13:51.689460 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:51 crc kubenswrapper[4849]: I1209 12:13:51.741374 4849 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:51 crc kubenswrapper[4849]: I1209 12:13:51.927631 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qsgp"] Dec 09 12:13:53 crc kubenswrapper[4849]: I1209 12:13:53.563844 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4qsgp" podUID="42655010-277e-4f81-826a-a2ea23d46571" containerName="registry-server" containerID="cri-o://a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5" gracePeriod=2 Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.511920 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.579046 4849 generic.go:334] "Generic (PLEG): container finished" podID="42655010-277e-4f81-826a-a2ea23d46571" containerID="a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5" exitCode=0 Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.579092 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qsgp" event={"ID":"42655010-277e-4f81-826a-a2ea23d46571","Type":"ContainerDied","Data":"a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5"} Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.579126 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qsgp" event={"ID":"42655010-277e-4f81-826a-a2ea23d46571","Type":"ContainerDied","Data":"5ea2fe074de581ca00d7fd6a100224b6c3f1f4d8c8f1441c7d7b0dfedecdbe1b"} Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.579126 4849 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qsgp" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.579144 4849 scope.go:117] "RemoveContainer" containerID="a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.600805 4849 scope.go:117] "RemoveContainer" containerID="3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.623226 4849 scope.go:117] "RemoveContainer" containerID="ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.662691 4849 scope.go:117] "RemoveContainer" containerID="a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5" Dec 09 12:13:54 crc kubenswrapper[4849]: E1209 12:13:54.663077 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5\": container with ID starting with a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5 not found: ID does not exist" containerID="a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.663115 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5"} err="failed to get container status \"a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5\": rpc error: code = NotFound desc = could not find container \"a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5\": container with ID starting with a938fdb7afa964e52660ec9cb2086cb1515f199cdd252424269094ad19f05de5 not found: ID does not exist" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.663144 4849 scope.go:117] "RemoveContainer" containerID="3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8" Dec 09 12:13:54 crc kubenswrapper[4849]: E1209 12:13:54.663540 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8\": container with ID starting with 3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8 not found: ID does not exist" containerID="3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.663566 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8"} err="failed to get container status \"3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8\": rpc error: code = NotFound desc = could not find container \"3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8\": container with ID starting with 3132363ba69b79f93df19add47c0c3f45eaa64b0b00255b06672571ee5f506a8 not found: ID does not exist" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.663586 4849 scope.go:117] "RemoveContainer" containerID="ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b" Dec 09 12:13:54 crc kubenswrapper[4849]: E1209 12:13:54.663891 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b\": container with ID starting 
with ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b not found: ID does not exist" containerID="ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.663914 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b"} err="failed to get container status \"ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b\": rpc error: code = NotFound desc = could not find container \"ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b\": container with ID starting with ecd531c5888c338f0c7c3737cde83fb04dc6af68104ea055164522048e2a4b0b not found: ID does not exist" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.689072 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-utilities\") pod \"42655010-277e-4f81-826a-a2ea23d46571\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.689293 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-catalog-content\") pod \"42655010-277e-4f81-826a-a2ea23d46571\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.689362 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdh2l\" (UniqueName: \"kubernetes.io/projected/42655010-277e-4f81-826a-a2ea23d46571-kube-api-access-fdh2l\") pod \"42655010-277e-4f81-826a-a2ea23d46571\" (UID: \"42655010-277e-4f81-826a-a2ea23d46571\") " Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.690064 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-utilities" (OuterVolumeSpecName: "utilities") pod "42655010-277e-4f81-826a-a2ea23d46571" (UID: "42655010-277e-4f81-826a-a2ea23d46571"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.691489 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.694130 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42655010-277e-4f81-826a-a2ea23d46571-kube-api-access-fdh2l" (OuterVolumeSpecName: "kube-api-access-fdh2l") pod "42655010-277e-4f81-826a-a2ea23d46571" (UID: "42655010-277e-4f81-826a-a2ea23d46571"). InnerVolumeSpecName "kube-api-access-fdh2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.793336 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdh2l\" (UniqueName: \"kubernetes.io/projected/42655010-277e-4f81-826a-a2ea23d46571-kube-api-access-fdh2l\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.797470 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42655010-277e-4f81-826a-a2ea23d46571" (UID: "42655010-277e-4f81-826a-a2ea23d46571"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.896000 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42655010-277e-4f81-826a-a2ea23d46571-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.919365 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qsgp"] Dec 09 12:13:54 crc kubenswrapper[4849]: I1209 12:13:54.928010 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4qsgp"] Dec 09 12:13:56 crc kubenswrapper[4849]: I1209 12:13:56.547287 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42655010-277e-4f81-826a-a2ea23d46571" path="/var/lib/kubelet/pods/42655010-277e-4f81-826a-a2ea23d46571/volumes" Dec 09 12:14:21 crc kubenswrapper[4849]: I1209 12:14:21.132838 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:14:21 crc kubenswrapper[4849]: I1209 12:14:21.133878 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.192059 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9c6fx"] Dec 09 12:14:38 crc kubenswrapper[4849]: E1209 12:14:38.193005 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerName="extract-content" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.193025 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerName="extract-content" Dec 09 12:14:38 crc kubenswrapper[4849]: E1209 12:14:38.193034 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerName="extract-utilities" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.193040 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerName="extract-utilities" Dec 09 12:14:38 crc kubenswrapper[4849]: E1209 12:14:38.193052 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42655010-277e-4f81-826a-a2ea23d46571" 
containerName="extract-utilities" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.193058 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="42655010-277e-4f81-826a-a2ea23d46571" containerName="extract-utilities" Dec 09 12:14:38 crc kubenswrapper[4849]: E1209 12:14:38.193068 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42655010-277e-4f81-826a-a2ea23d46571" containerName="registry-server" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.193073 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="42655010-277e-4f81-826a-a2ea23d46571" containerName="registry-server" Dec 09 12:14:38 crc kubenswrapper[4849]: E1209 12:14:38.193095 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42655010-277e-4f81-826a-a2ea23d46571" containerName="extract-content" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.193104 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="42655010-277e-4f81-826a-a2ea23d46571" containerName="extract-content" Dec 09 12:14:38 crc kubenswrapper[4849]: E1209 12:14:38.193118 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerName="registry-server" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.193125 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerName="registry-server" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.193313 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="42655010-277e-4f81-826a-a2ea23d46571" containerName="registry-server" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.193339 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b09794-8bd4-4179-9fb8-31c295e10cc8" containerName="registry-server" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.194985 4849 util.go:30] "No sandbox for pod can be found. 
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.194985 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c6fx"
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.215490 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-utilities\") pod \"community-operators-9c6fx\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") " pod="openshift-marketplace/community-operators-9c6fx"
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.215758 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-catalog-content\") pod \"community-operators-9c6fx\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") " pod="openshift-marketplace/community-operators-9c6fx"
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.215955 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdhl\" (UniqueName: \"kubernetes.io/projected/7a1d7c17-72be-4c3a-9080-9a07033c93c7-kube-api-access-vwdhl\") pod \"community-operators-9c6fx\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") " pod="openshift-marketplace/community-operators-9c6fx"
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.223176 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9c6fx"]
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.317702 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-utilities\") pod \"community-operators-9c6fx\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") " pod="openshift-marketplace/community-operators-9c6fx"
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.317759 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-catalog-content\") pod \"community-operators-9c6fx\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") " pod="openshift-marketplace/community-operators-9c6fx"
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.317850 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdhl\" (UniqueName: \"kubernetes.io/projected/7a1d7c17-72be-4c3a-9080-9a07033c93c7-kube-api-access-vwdhl\") pod \"community-operators-9c6fx\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") " pod="openshift-marketplace/community-operators-9c6fx"
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.318267 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-utilities\") pod \"community-operators-9c6fx\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") " pod="openshift-marketplace/community-operators-9c6fx"
Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.318515 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-catalog-content\") pod \"community-operators-9c6fx\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") " pod="openshift-marketplace/community-operators-9c6fx"
"MountVolume.SetUp succeeded for volume \"kube-api-access-vwdhl\" (UniqueName: \"kubernetes.io/projected/7a1d7c17-72be-4c3a-9080-9a07033c93c7-kube-api-access-vwdhl\") pod \"community-operators-9c6fx\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") " pod="openshift-marketplace/community-operators-9c6fx" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.514423 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c6fx" Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.895180 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9c6fx"] Dec 09 12:14:38 crc kubenswrapper[4849]: I1209 12:14:38.968240 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6fx" event={"ID":"7a1d7c17-72be-4c3a-9080-9a07033c93c7","Type":"ContainerStarted","Data":"d2ecebce0e69d7d0f1c60e5fa07ae05ebb3836d0f743d66ea323ed92438cabee"} Dec 09 12:14:39 crc kubenswrapper[4849]: I1209 12:14:39.979936 4849 generic.go:334] "Generic (PLEG): container finished" podID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerID="6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec" exitCode=0 Dec 09 12:14:39 crc kubenswrapper[4849]: I1209 12:14:39.980036 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6fx" event={"ID":"7a1d7c17-72be-4c3a-9080-9a07033c93c7","Type":"ContainerDied","Data":"6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec"} Dec 09 12:14:40 crc kubenswrapper[4849]: I1209 12:14:40.997669 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6fx" event={"ID":"7a1d7c17-72be-4c3a-9080-9a07033c93c7","Type":"ContainerStarted","Data":"3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874"} Dec 09 12:14:42 crc kubenswrapper[4849]: I1209 12:14:42.008316 4849 generic.go:334] "Generic (PLEG): container finished" podID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerID="3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874" exitCode=0 Dec 09 12:14:42 crc kubenswrapper[4849]: I1209 12:14:42.008634 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6fx" event={"ID":"7a1d7c17-72be-4c3a-9080-9a07033c93c7","Type":"ContainerDied","Data":"3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874"} Dec 09 12:14:44 crc kubenswrapper[4849]: I1209 12:14:44.034328 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6fx" event={"ID":"7a1d7c17-72be-4c3a-9080-9a07033c93c7","Type":"ContainerStarted","Data":"371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba"} Dec 09 12:14:44 crc kubenswrapper[4849]: I1209 12:14:44.060801 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9c6fx" podStartSLOduration=3.4313343 podStartE2EDuration="6.060785457s" podCreationTimestamp="2025-12-09 12:14:38 +0000 UTC" firstStartedPulling="2025-12-09 12:14:39.982089187 +0000 UTC m=+2862.521973503" lastFinishedPulling="2025-12-09 12:14:42.611540344 +0000 UTC m=+2865.151424660" observedRunningTime="2025-12-09 12:14:44.059167117 +0000 UTC m=+2866.599051453" watchObservedRunningTime="2025-12-09 12:14:44.060785457 +0000 UTC m=+2866.600669773" Dec 09 12:14:48 crc kubenswrapper[4849]: I1209 12:14:48.514880 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-9c6fx" Dec 09 12:14:48 crc kubenswrapper[4849]: I1209 12:14:48.517621 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9c6fx" Dec 09 12:14:48 crc kubenswrapper[4849]: I1209 12:14:48.569766 4849 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9c6fx" Dec 09 12:14:49 crc kubenswrapper[4849]: I1209 12:14:49.132015 4849 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9c6fx" Dec 09 12:14:49 crc kubenswrapper[4849]: I1209 12:14:49.196336 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9c6fx"] Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.100281 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9c6fx" podUID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerName="registry-server" containerID="cri-o://371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba" gracePeriod=2 Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.132754 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.132818 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.812296 4849 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.812296 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c6fx"
Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.842802 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwdhl\" (UniqueName: \"kubernetes.io/projected/7a1d7c17-72be-4c3a-9080-9a07033c93c7-kube-api-access-vwdhl\") pod \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") "
Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.842926 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-utilities\") pod \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") "
Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.843062 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-catalog-content\") pod \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\" (UID: \"7a1d7c17-72be-4c3a-9080-9a07033c93c7\") "
Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.849274 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1d7c17-72be-4c3a-9080-9a07033c93c7-kube-api-access-vwdhl" (OuterVolumeSpecName: "kube-api-access-vwdhl") pod "7a1d7c17-72be-4c3a-9080-9a07033c93c7" (UID: "7a1d7c17-72be-4c3a-9080-9a07033c93c7"). InnerVolumeSpecName "kube-api-access-vwdhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.850363 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-utilities" (OuterVolumeSpecName: "utilities") pod "7a1d7c17-72be-4c3a-9080-9a07033c93c7" (UID: "7a1d7c17-72be-4c3a-9080-9a07033c93c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.902015 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a1d7c17-72be-4c3a-9080-9a07033c93c7" (UID: "7a1d7c17-72be-4c3a-9080-9a07033c93c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.945199 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwdhl\" (UniqueName: \"kubernetes.io/projected/7a1d7c17-72be-4c3a-9080-9a07033c93c7-kube-api-access-vwdhl\") on node \"crc\" DevicePath \"\"" Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.945245 4849 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:14:51 crc kubenswrapper[4849]: I1209 12:14:51.945256 4849 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1d7c17-72be-4c3a-9080-9a07033c93c7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.111850 4849 generic.go:334] "Generic (PLEG): container finished" podID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerID="371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba" exitCode=0 Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.111905 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6fx" event={"ID":"7a1d7c17-72be-4c3a-9080-9a07033c93c7","Type":"ContainerDied","Data":"371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba"} Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.111941 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6fx" event={"ID":"7a1d7c17-72be-4c3a-9080-9a07033c93c7","Type":"ContainerDied","Data":"d2ecebce0e69d7d0f1c60e5fa07ae05ebb3836d0f743d66ea323ed92438cabee"} Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.111950 4849 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.111964 4849 scope.go:117] "RemoveContainer" containerID="371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba"
Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.137866 4849 scope.go:117] "RemoveContainer" containerID="3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874"
Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.147032 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9c6fx"]
Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.167102 4849 scope.go:117] "RemoveContainer" containerID="6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec"
Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.171719 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9c6fx"]
Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.210166 4849 scope.go:117] "RemoveContainer" containerID="371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba"
Dec 09 12:14:52 crc kubenswrapper[4849]: E1209 12:14:52.210743 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba\": container with ID starting with 371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba not found: ID does not exist" containerID="371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba"
Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.210804 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba"} err="failed to get container status \"371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba\": rpc error: code = NotFound desc = could not find container \"371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba\": container with ID starting with 371e8c6f44c8c93a86663f8117d752cf8f84e2f747af3547cc94787dfde4b5ba not found: ID does not exist"
Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.210839 4849 scope.go:117] "RemoveContainer" containerID="3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874"
Dec 09 12:14:52 crc kubenswrapper[4849]: E1209 12:14:52.211313 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874\": container with ID starting with 3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874 not found: ID does not exist" containerID="3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874"
Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.211343 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874"} err="failed to get container status \"3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874\": rpc error: code = NotFound desc = could not find container \"3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874\": container with ID starting with 3145dc8bfb10bfb6fc0357019076dbba2d71dbe1478241cc3861bdfea4a76874 not found: ID does not exist"
containerID="6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec" Dec 09 12:14:52 crc kubenswrapper[4849]: E1209 12:14:52.211783 4849 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec\": container with ID starting with 6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec not found: ID does not exist" containerID="6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec" Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.211843 4849 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec"} err="failed to get container status \"6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec\": rpc error: code = NotFound desc = could not find container \"6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec\": container with ID starting with 6f9f489428608c128a0e4b93af2ce8f0add1b396ac96dd04b6381c579ae9d7ec not found: ID does not exist" Dec 09 12:14:52 crc kubenswrapper[4849]: I1209 12:14:52.546046 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" path="/var/lib/kubelet/pods/7a1d7c17-72be-4c3a-9080-9a07033c93c7/volumes" Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.176677 4849 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"] Dec 09 12:15:00 crc kubenswrapper[4849]: E1209 12:15:00.177699 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerName="registry-server" Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.177714 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerName="registry-server" Dec 09 12:15:00 crc kubenswrapper[4849]: E1209 12:15:00.177739 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerName="extract-utilities" Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.177748 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerName="extract-utilities" Dec 09 12:15:00 crc kubenswrapper[4849]: E1209 12:15:00.177761 4849 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerName="extract-content" Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.177770 4849 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerName="extract-content" Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.178024 4849 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1d7c17-72be-4c3a-9080-9a07033c93c7" containerName="registry-server" Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.178816 4849 util.go:30] "No sandbox for pod can be found. 
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.183317 4849 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.183671 4849 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.202383 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"]
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.244090 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a7bcf4a-247f-45b3-a846-48fd25e31409-secret-volume\") pod \"collect-profiles-29421375-b5jx4\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.244132 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a7bcf4a-247f-45b3-a846-48fd25e31409-config-volume\") pod \"collect-profiles-29421375-b5jx4\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.244182 4849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmmpm\" (UniqueName: \"kubernetes.io/projected/9a7bcf4a-247f-45b3-a846-48fd25e31409-kube-api-access-lmmpm\") pod \"collect-profiles-29421375-b5jx4\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.345564 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a7bcf4a-247f-45b3-a846-48fd25e31409-secret-volume\") pod \"collect-profiles-29421375-b5jx4\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.345611 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a7bcf4a-247f-45b3-a846-48fd25e31409-config-volume\") pod \"collect-profiles-29421375-b5jx4\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.345664 4849 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmmpm\" (UniqueName: \"kubernetes.io/projected/9a7bcf4a-247f-45b3-a846-48fd25e31409-kube-api-access-lmmpm\") pod \"collect-profiles-29421375-b5jx4\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.346510 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a7bcf4a-247f-45b3-a846-48fd25e31409-config-volume\") pod \"collect-profiles-29421375-b5jx4\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.350940 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a7bcf4a-247f-45b3-a846-48fd25e31409-secret-volume\") pod \"collect-profiles-29421375-b5jx4\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.365943 4849 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmmpm\" (UniqueName: \"kubernetes.io/projected/9a7bcf4a-247f-45b3-a846-48fd25e31409-kube-api-access-lmmpm\") pod \"collect-profiles-29421375-b5jx4\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.503379 4849 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:00 crc kubenswrapper[4849]: I1209 12:15:00.978815 4849 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"]
Dec 09 12:15:01 crc kubenswrapper[4849]: I1209 12:15:01.207264 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4" event={"ID":"9a7bcf4a-247f-45b3-a846-48fd25e31409","Type":"ContainerStarted","Data":"1724283bb76f936bdfb3df96648e949270297341c28b397c653983b934ed1377"}
Dec 09 12:15:01 crc kubenswrapper[4849]: I1209 12:15:01.207328 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4" event={"ID":"9a7bcf4a-247f-45b3-a846-48fd25e31409","Type":"ContainerStarted","Data":"dc78452ac22b4e2794b33464b66bade57d21078f9e5cd742a94f3c82aa0e18b6"}
Dec 09 12:15:01 crc kubenswrapper[4849]: I1209 12:15:01.230354 4849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4" podStartSLOduration=1.230328442 podStartE2EDuration="1.230328442s" podCreationTimestamp="2025-12-09 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:15:01.227126252 +0000 UTC m=+2883.767010568" watchObservedRunningTime="2025-12-09 12:15:01.230328442 +0000 UTC m=+2883.770212768"
Dec 09 12:15:02 crc kubenswrapper[4849]: I1209 12:15:02.224661 4849 generic.go:334] "Generic (PLEG): container finished" podID="9a7bcf4a-247f-45b3-a846-48fd25e31409" containerID="1724283bb76f936bdfb3df96648e949270297341c28b397c653983b934ed1377" exitCode=0
Dec 09 12:15:02 crc kubenswrapper[4849]: I1209 12:15:02.224921 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4" event={"ID":"9a7bcf4a-247f-45b3-a846-48fd25e31409","Type":"ContainerDied","Data":"1724283bb76f936bdfb3df96648e949270297341c28b397c653983b934ed1377"}
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.552812 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.605186 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmmpm\" (UniqueName: \"kubernetes.io/projected/9a7bcf4a-247f-45b3-a846-48fd25e31409-kube-api-access-lmmpm\") pod \"9a7bcf4a-247f-45b3-a846-48fd25e31409\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") "
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.605291 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a7bcf4a-247f-45b3-a846-48fd25e31409-secret-volume\") pod \"9a7bcf4a-247f-45b3-a846-48fd25e31409\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") "
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.605427 4849 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a7bcf4a-247f-45b3-a846-48fd25e31409-config-volume\") pod \"9a7bcf4a-247f-45b3-a846-48fd25e31409\" (UID: \"9a7bcf4a-247f-45b3-a846-48fd25e31409\") "
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.614606 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7bcf4a-247f-45b3-a846-48fd25e31409-kube-api-access-lmmpm" (OuterVolumeSpecName: "kube-api-access-lmmpm") pod "9a7bcf4a-247f-45b3-a846-48fd25e31409" (UID: "9a7bcf4a-247f-45b3-a846-48fd25e31409"). InnerVolumeSpecName "kube-api-access-lmmpm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.617362 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a7bcf4a-247f-45b3-a846-48fd25e31409-config-volume" (OuterVolumeSpecName: "config-volume") pod "9a7bcf4a-247f-45b3-a846-48fd25e31409" (UID: "9a7bcf4a-247f-45b3-a846-48fd25e31409"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.620566 4849 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7bcf4a-247f-45b3-a846-48fd25e31409-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9a7bcf4a-247f-45b3-a846-48fd25e31409" (UID: "9a7bcf4a-247f-45b3-a846-48fd25e31409"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.709149 4849 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a7bcf4a-247f-45b3-a846-48fd25e31409-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.709184 4849 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmmpm\" (UniqueName: \"kubernetes.io/projected/9a7bcf4a-247f-45b3-a846-48fd25e31409-kube-api-access-lmmpm\") on node \"crc\" DevicePath \"\""
Dec 09 12:15:03 crc kubenswrapper[4849]: I1209 12:15:03.709196 4849 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a7bcf4a-247f-45b3-a846-48fd25e31409-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 09 12:15:04 crc kubenswrapper[4849]: I1209 12:15:04.241157 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4" event={"ID":"9a7bcf4a-247f-45b3-a846-48fd25e31409","Type":"ContainerDied","Data":"dc78452ac22b4e2794b33464b66bade57d21078f9e5cd742a94f3c82aa0e18b6"}
Dec 09 12:15:04 crc kubenswrapper[4849]: I1209 12:15:04.241477 4849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc78452ac22b4e2794b33464b66bade57d21078f9e5cd742a94f3c82aa0e18b6"
Dec 09 12:15:04 crc kubenswrapper[4849]: I1209 12:15:04.241207 4849 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-b5jx4"
Dec 09 12:15:04 crc kubenswrapper[4849]: I1209 12:15:04.313258 4849 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s"]
Dec 09 12:15:04 crc kubenswrapper[4849]: I1209 12:15:04.321593 4849 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421330-tsz9s"]
Dec 09 12:15:04 crc kubenswrapper[4849]: I1209 12:15:04.549601 4849 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="136005bd-018c-43bc-b768-5f036f7e2c40" path="/var/lib/kubelet/pods/136005bd-018c-43bc-b768-5f036f7e2c40/volumes"
Dec 09 12:15:07 crc kubenswrapper[4849]: I1209 12:15:07.660912 4849 scope.go:117] "RemoveContainer" containerID="96db2af45ff8acb81b86ef373ccf1adb3af357e38745c8cf08077d88580ee321"
Dec 09 12:15:21 crc kubenswrapper[4849]: I1209 12:15:21.132221 4849 patch_prober.go:28] interesting pod/machine-config-daemon-89kpx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:15:21 crc kubenswrapper[4849]: I1209 12:15:21.132800 4849 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:15:21 crc kubenswrapper[4849]: I1209 12:15:21.132845 4849 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-89kpx"
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56"} pod="openshift-machine-config-operator/machine-config-daemon-89kpx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:15:21 crc kubenswrapper[4849]: I1209 12:15:21.133616 4849 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" containerName="machine-config-daemon" containerID="cri-o://2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56" gracePeriod=600 Dec 09 12:15:21 crc kubenswrapper[4849]: E1209 12:15:21.252098 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:15:21 crc kubenswrapper[4849]: I1209 12:15:21.393567 4849 generic.go:334] "Generic (PLEG): container finished" podID="157c6f6c-042b-4da3-934e-a08474e56486" containerID="2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56" exitCode=0 Dec 09 12:15:21 crc kubenswrapper[4849]: I1209 12:15:21.393875 4849 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" event={"ID":"157c6f6c-042b-4da3-934e-a08474e56486","Type":"ContainerDied","Data":"2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56"} Dec 09 12:15:21 crc kubenswrapper[4849]: I1209 12:15:21.393910 4849 scope.go:117] "RemoveContainer" containerID="66280f7ade75804b6c0096c7c66616b5d7da643ae6d9d19df6728655528ef876" Dec 09 12:15:21 crc kubenswrapper[4849]: I1209 12:15:21.394643 4849 scope.go:117] "RemoveContainer" containerID="2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56" Dec 09 12:15:21 crc kubenswrapper[4849]: E1209 12:15:21.394908 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:15:34 crc kubenswrapper[4849]: I1209 12:15:34.541487 4849 scope.go:117] "RemoveContainer" containerID="2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56" Dec 09 12:15:34 crc kubenswrapper[4849]: E1209 12:15:34.542583 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:15:45 crc kubenswrapper[4849]: I1209 12:15:45.536894 4849 scope.go:117] "RemoveContainer" containerID="2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56" Dec 09 12:15:45 crc kubenswrapper[4849]: E1209 12:15:45.537685 4849 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:15:59 crc kubenswrapper[4849]: I1209 12:15:59.537387 4849 scope.go:117] "RemoveContainer" containerID="2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56" Dec 09 12:15:59 crc kubenswrapper[4849]: E1209 12:15:59.538294 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486" Dec 09 12:16:10 crc kubenswrapper[4849]: I1209 12:16:10.536327 4849 scope.go:117] "RemoveContainer" containerID="2b4cd4ee0fd6b8ad7eb5e2d06bb15a3a9d7257eca421494ffea9365ea5218f56" Dec 09 12:16:10 crc kubenswrapper[4849]: E1209 12:16:10.537190 4849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-89kpx_openshift-machine-config-operator(157c6f6c-042b-4da3-934e-a08474e56486)\"" pod="openshift-machine-config-operator/machine-config-daemon-89kpx" podUID="157c6f6c-042b-4da3-934e-a08474e56486"
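The tail of the log is a crash loop in steady state: every sync attempt for machine-config-daemon is rejected with "back-off 5m0s restarting failed container", meaning the container has failed often enough that kubelet's restart backoff (by default 10s initially, doubled per failure, capped at 5 minutes) has reached its ceiling. The repeated "Error syncing pod, skipping" lines every ten to twenty seconds are the sync loop re-checking while the backoff gate is still closed, not additional restart attempts. A sketch of the delay progression under those default parameters:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Default kubelet crash-loop backoff: 10s initial, x2 per failure,
	// capped at 5m -- hence "back-off 5m0s" in the messages above.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart %d: wait %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Output: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s ...
}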